00:00:00.001 Started by upstream project "autotest-nightly" build number 3886
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3266
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.002 Started by timer
00:00:00.136 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.136 The recommended git tool is: git
00:00:00.136 using credential 00000000-0000-0000-0000-000000000002
00:00:00.138 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.190 Fetching changes from the remote Git repository
00:00:00.192 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.241 Using shallow fetch with depth 1
00:00:00.241 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.241 > git --version # timeout=10
00:00:00.278 > git --version # 'git version 2.39.2'
00:00:00.278 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.301 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.301 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.788 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.798 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.809 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD)
00:00:06.809 > git config core.sparsecheckout # timeout=10
00:00:06.819 > git read-tree -mu HEAD # timeout=10
00:00:06.834 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5
00:00:06.851 Commit message: "inventory: add WCP3 to free inventory"
00:00:06.851 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10
00:00:06.928 [Pipeline] Start of Pipeline
00:00:06.939 [Pipeline] library
00:00:06.941 Loading library shm_lib@master
00:00:06.941 Library shm_lib@master is cached. Copying from home.
00:00:06.955 [Pipeline] node
00:00:06.962 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.964 [Pipeline] {
00:00:06.972 [Pipeline] catchError
00:00:06.973 [Pipeline] {
00:00:06.984 [Pipeline] wrap
00:00:06.990 [Pipeline] {
00:00:06.996 [Pipeline] stage
00:00:06.997 [Pipeline] { (Prologue)
00:00:07.143 [Pipeline] sh
00:00:07.428 + logger -p user.info -t JENKINS-CI
00:00:07.448 [Pipeline] echo
00:00:07.450 Node: GP11
00:00:07.456 [Pipeline] sh
00:00:07.768 [Pipeline] setCustomBuildProperty
00:00:07.779 [Pipeline] echo
00:00:07.780 Cleanup processes
00:00:07.785 [Pipeline] sh
00:00:08.068 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.068 3839864 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.081 [Pipeline] sh
00:00:08.363 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.363 ++ grep -v 'sudo pgrep'
00:00:08.363 ++ awk '{print $1}'
00:00:08.363 + sudo kill -9
00:00:08.363 + true
00:00:08.380 [Pipeline] cleanWs
00:00:08.392 [WS-CLEANUP] Deleting project workspace...
00:00:08.392 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.398 [WS-CLEANUP] done
00:00:08.402 [Pipeline] setCustomBuildProperty
00:00:08.414 [Pipeline] sh
00:00:08.694 + sudo git config --global --replace-all safe.directory '*'
00:00:08.801 [Pipeline] httpRequest
00:00:08.833 [Pipeline] echo
00:00:08.834 Sorcerer 10.211.164.101 is alive
00:00:08.842 [Pipeline] httpRequest
00:00:08.847 HttpMethod: GET
00:00:08.847 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:08.848 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:08.850 Response Code: HTTP/1.1 200 OK
00:00:08.850 Success: Status code 200 is in the accepted range: 200,404
00:00:08.851 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:09.969 [Pipeline] sh
00:00:10.254 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:10.271 [Pipeline] httpRequest
00:00:10.288 [Pipeline] echo
00:00:10.289 Sorcerer 10.211.164.101 is alive
00:00:10.295 [Pipeline] httpRequest
00:00:10.299 HttpMethod: GET
00:00:10.300 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:10.301 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:10.322 Response Code: HTTP/1.1 200 OK
00:00:10.322 Success: Status code 200 is in the accepted range: 200,404
00:00:10.323 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:01:06.834 [Pipeline] sh
00:01:07.121 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:01:09.664 [Pipeline] sh
00:01:09.949 + git -C spdk log --oneline -n5
00:01:09.949 719d03c6a sock/uring: only register net impl if supported
00:01:09.949 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:01:09.949 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:01:09.949 6c7c1f57e accel: add sequence outstanding stat
00:01:09.949 3bc8e6a26 accel: add utility to put task
00:01:09.963 [Pipeline] }
00:01:09.981 [Pipeline] // stage
00:01:09.990 [Pipeline] stage
00:01:09.992 [Pipeline] { (Prepare)
00:01:10.013 [Pipeline] writeFile
00:01:10.032 [Pipeline] sh
00:01:10.342 + logger -p user.info -t JENKINS-CI
00:01:10.356 [Pipeline] sh
00:01:10.640 + logger -p user.info -t JENKINS-CI
00:01:10.653 [Pipeline] sh
00:01:10.938 + cat autorun-spdk.conf
00:01:10.938 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:10.938 SPDK_TEST_NVMF=1
00:01:10.938 SPDK_TEST_NVME_CLI=1
00:01:10.938 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:10.938 SPDK_TEST_NVMF_NICS=e810
00:01:10.938 SPDK_RUN_ASAN=1
00:01:10.938 SPDK_RUN_UBSAN=1
00:01:10.938 NET_TYPE=phy
00:01:10.945 RUN_NIGHTLY=1
00:01:10.949 [Pipeline] readFile
00:01:10.976 [Pipeline] withEnv
00:01:10.978 [Pipeline] {
00:01:10.991 [Pipeline] sh
00:01:11.275 + set -ex
00:01:11.275 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:11.275 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:11.275 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:11.275 ++ SPDK_TEST_NVMF=1
00:01:11.275 ++ SPDK_TEST_NVME_CLI=1
00:01:11.275 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:11.275 ++ SPDK_TEST_NVMF_NICS=e810
00:01:11.275 ++ SPDK_RUN_ASAN=1
00:01:11.275 ++ SPDK_RUN_UBSAN=1
00:01:11.275 ++ NET_TYPE=phy
00:01:11.275 ++ RUN_NIGHTLY=1
00:01:11.275 + case $SPDK_TEST_NVMF_NICS in
00:01:11.275 + DRIVERS=ice
00:01:11.275 + [[ tcp == \r\d\m\a ]]
00:01:11.275 + [[ -n ice ]]
00:01:11.275 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:11.275 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:11.275 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:11.275 rmmod: ERROR: Module irdma is not currently loaded
00:01:11.275 rmmod: ERROR: Module i40iw is not currently loaded
00:01:11.275 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:11.275 + true
00:01:11.275 + for D in $DRIVERS
00:01:11.275 + sudo modprobe ice
00:01:11.275 + exit 0
00:01:11.284 [Pipeline] }
00:01:11.301 [Pipeline] // withEnv
00:01:11.306 [Pipeline] }
00:01:11.322 [Pipeline] // stage
00:01:11.331 [Pipeline] catchError
00:01:11.332 [Pipeline] {
00:01:11.347 [Pipeline] timeout
00:01:11.347 Timeout set to expire in 50 min
00:01:11.349 [Pipeline] {
00:01:11.363 [Pipeline] stage
00:01:11.365 [Pipeline] { (Tests)
00:01:11.380 [Pipeline] sh
00:01:11.663 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:11.663 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:11.663 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:11.663 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:11.663 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:11.663 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:11.663 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:11.663 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:11.663 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:11.663 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:11.663 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:11.663 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:11.663 + source /etc/os-release
00:01:11.663 ++ NAME='Fedora Linux'
00:01:11.663 ++ VERSION='38 (Cloud Edition)'
00:01:11.663 ++ ID=fedora
00:01:11.663 ++ VERSION_ID=38
00:01:11.663 ++ VERSION_CODENAME=
00:01:11.663 ++ PLATFORM_ID=platform:f38
00:01:11.663 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:11.663 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:11.663 ++ LOGO=fedora-logo-icon
00:01:11.663 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:11.663 ++ HOME_URL=https://fedoraproject.org/
00:01:11.663 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:11.663 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:11.663 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:11.663 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:11.663 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:11.663 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:11.663 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:11.663 ++ SUPPORT_END=2024-05-14
00:01:11.663 ++ VARIANT='Cloud Edition'
00:01:11.663 ++ VARIANT_ID=cloud
00:01:11.663 + uname -a
00:01:11.663 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:11.663 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:12.600 Hugepages
00:01:12.600 node hugesize free / total
00:01:12.600 node0 1048576kB 0 / 0
00:01:12.600 node0 2048kB 0 / 0
00:01:12.600 node1 1048576kB 0 / 0
00:01:12.600 node1 2048kB 0 / 0
00:01:12.600
00:01:12.600 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:12.600 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:12.600 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:12.600 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:12.600 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:12.600 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:12.600 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:12.600 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:12.600 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:12.600 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:12.600 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:12.600 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:12.600 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:12.600 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:12.600 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:12.600 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:12.600 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:12.859 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:12.859 + rm -f /tmp/spdk-ld-path
00:01:12.859 + source autorun-spdk.conf
00:01:12.859 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:12.859 ++ SPDK_TEST_NVMF=1
00:01:12.859 ++ SPDK_TEST_NVME_CLI=1
00:01:12.859 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:12.859 ++ SPDK_TEST_NVMF_NICS=e810
00:01:12.859 ++ SPDK_RUN_ASAN=1
00:01:12.859 ++ SPDK_RUN_UBSAN=1
00:01:12.859 ++ NET_TYPE=phy
00:01:12.859 ++ RUN_NIGHTLY=1
00:01:12.859 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:12.859 + [[ -n '' ]]
00:01:12.859 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:12.859 + for M in /var/spdk/build-*-manifest.txt
00:01:12.859 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:12.859 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:12.859 + for M in /var/spdk/build-*-manifest.txt
00:01:12.859 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:12.859 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:12.859 ++ uname
00:01:12.859 + [[ Linux == \L\i\n\u\x ]]
00:01:12.859 + sudo dmesg -T
00:01:12.860 + sudo dmesg --clear
00:01:12.860 + dmesg_pid=3840553
00:01:12.860 + [[ Fedora Linux == FreeBSD ]]
00:01:12.860 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:12.860 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:12.860 + sudo dmesg -Tw
00:01:12.860 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:12.860 + [[ -x /usr/src/fio-static/fio ]]
00:01:12.860 + export FIO_BIN=/usr/src/fio-static/fio
00:01:12.860 + FIO_BIN=/usr/src/fio-static/fio
00:01:12.860 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:12.860 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:12.860 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:12.860 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:12.860 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:12.860 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:12.860 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:12.860 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:12.860 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:12.860 Test configuration:
00:01:12.860 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:12.860 SPDK_TEST_NVMF=1
00:01:12.860 SPDK_TEST_NVME_CLI=1
00:01:12.860 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:12.860 SPDK_TEST_NVMF_NICS=e810
00:01:12.860 SPDK_RUN_ASAN=1
00:01:12.860 SPDK_RUN_UBSAN=1
00:01:12.860 NET_TYPE=phy
00:01:12.860 RUN_NIGHTLY=1
00:01:12.860 21:44:32 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:12.860 21:44:32 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:12.860 21:44:32 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:12.860 21:44:32 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:12.860 21:44:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:12.860 21:44:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:12.860 21:44:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:12.860 21:44:32 -- paths/export.sh@5 -- $ export PATH
00:01:12.860 21:44:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:12.860 21:44:32 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:12.860 21:44:32 -- common/autobuild_common.sh@444 -- $ date +%s
00:01:12.860 21:44:32 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720899872.XXXXXX
00:01:12.860 21:44:32 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720899872.KsliRA
00:01:12.860 21:44:32 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:01:12.860 21:44:32 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:01:12.860 21:44:32 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:12.860 21:44:32 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:12.860 21:44:32 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:12.860 21:44:32 -- common/autobuild_common.sh@460 -- $ get_config_params
00:01:12.860 21:44:32 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:01:12.860 21:44:32 -- common/autotest_common.sh@10 -- $ set +x
00:01:12.860 21:44:32 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:01:12.860 21:44:32 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:01:12.860 21:44:32 -- pm/common@17 -- $ local monitor
00:01:12.860 21:44:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:12.860 21:44:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:12.860 21:44:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:12.860 21:44:32 -- pm/common@21 -- $ date +%s
00:01:12.860 21:44:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:12.860 21:44:32 -- pm/common@21 -- $ date +%s
00:01:12.860 21:44:32 -- pm/common@25 -- $ sleep 1
00:01:12.860 21:44:32 -- pm/common@21 -- $ date +%s
00:01:12.860 21:44:32 -- pm/common@21 -- $ date +%s
00:01:12.860 21:44:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720899872
00:01:12.860 21:44:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720899872
00:01:12.860 21:44:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720899872
00:01:12.860 21:44:32 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720899872
00:01:12.860 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720899872_collect-vmstat.pm.log
00:01:12.860 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720899872_collect-cpu-load.pm.log
00:01:12.860 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720899872_collect-cpu-temp.pm.log
00:01:12.860 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720899872_collect-bmc-pm.bmc.pm.log
00:01:14.242 21:44:33 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:01:14.242 21:44:33 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:14.242 21:44:33 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:14.242 21:44:33 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:14.242 21:44:33 -- spdk/autobuild.sh@16 -- $ date -u
00:01:14.242 Sat Jul 13 07:44:33 PM UTC 2024
00:01:14.242 21:44:33 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:14.242 v24.09-pre-202-g719d03c6a
00:01:14.242 21:44:33 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:14.242 21:44:33 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:14.242 21:44:33 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:14.242 21:44:33 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:14.242 21:44:33 -- common/autotest_common.sh@10 -- $ set +x
00:01:14.242 ************************************
00:01:14.242 START TEST asan
00:01:14.242 ************************************
00:01:14.242 21:44:33 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan'
00:01:14.242 using asan
00:01:14.242
00:01:14.242 real 0m0.000s
00:01:14.242 user 0m0.000s
00:01:14.242 sys 0m0.000s
00:01:14.242 21:44:33 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:01:14.242 21:44:33 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:14.242 ************************************
00:01:14.242 END TEST asan
00:01:14.242 ************************************
00:01:14.242 21:44:33 -- common/autotest_common.sh@1142 -- $ return 0
00:01:14.242 21:44:33 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:14.242 21:44:33 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:14.242 21:44:33 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:14.242 21:44:33 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:14.242 21:44:33 -- common/autotest_common.sh@10 -- $ set +x
00:01:14.242 ************************************
00:01:14.242 START TEST ubsan
00:01:14.242 ************************************
00:01:14.242 21:44:33 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:01:14.242 using ubsan
00:01:14.242
00:01:14.242 real 0m0.000s
00:01:14.242 user 0m0.000s
00:01:14.242 sys 0m0.000s
00:01:14.242 21:44:33 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:01:14.242 21:44:33 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:14.242 ************************************
00:01:14.242 END TEST ubsan
00:01:14.242 ************************************
00:01:14.242 21:44:33 -- common/autotest_common.sh@1142 -- $ return 0
00:01:14.242 21:44:33 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:14.242 21:44:33 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:14.242 21:44:33 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:14.242 21:44:33 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:14.242 21:44:33 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:14.242 21:44:33 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:14.242 21:44:33 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:14.242 21:44:33 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:14.242 21:44:33 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:01:14.242 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:14.242 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:14.503 Using 'verbs' RDMA provider
00:01:25.056 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:35.048 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:35.048 Creating mk/config.mk...done.
00:01:35.048 Creating mk/cc.flags.mk...done.
00:01:35.048 Type 'make' to build.
00:01:35.048 21:44:53 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:01:35.048 21:44:53 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:35.048 21:44:53 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:35.048 21:44:53 -- common/autotest_common.sh@10 -- $ set +x
00:01:35.048 ************************************
00:01:35.048 START TEST make
00:01:35.048 ************************************
00:01:35.048 21:44:53 make -- common/autotest_common.sh@1123 -- $ make -j48
00:01:35.048 make[1]: Nothing to be done for 'all'.
00:01:43.189 The Meson build system
00:01:43.189 Version: 1.3.1
00:01:43.189 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:43.189 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:43.189 Build type: native build
00:01:43.189 Program cat found: YES (/usr/bin/cat)
00:01:43.189 Project name: DPDK
00:01:43.189 Project version: 24.03.0
00:01:43.189 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:43.189 C linker for the host machine: cc ld.bfd 2.39-16
00:01:43.189 Host machine cpu family: x86_64
00:01:43.189 Host machine cpu: x86_64
00:01:43.189 Message: ## Building in Developer Mode ##
00:01:43.189 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:43.189 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:43.189 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:43.189 Program python3 found: YES (/usr/bin/python3)
00:01:43.189 Program cat found: YES (/usr/bin/cat)
00:01:43.189 Compiler for C supports arguments -march=native: YES
00:01:43.189 Checking for size of "void *" : 8
00:01:43.189 Checking for size of "void *" : 8 (cached)
00:01:43.189 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:43.189 Library m found: YES
00:01:43.189 Library numa found: YES
00:01:43.189 Has header "numaif.h" : YES
00:01:43.189 Library fdt found: NO
00:01:43.189 Library execinfo found: NO
00:01:43.189 Has header "execinfo.h" : YES
00:01:43.189 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:43.189 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:43.189 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:43.189 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:43.189 Run-time dependency openssl found: YES 3.0.9
00:01:43.189 Run-time dependency libpcap found: YES 1.10.4
00:01:43.189 Has header "pcap.h" with dependency libpcap: YES
00:01:43.189 Compiler for C supports arguments -Wcast-qual: YES
00:01:43.189 Compiler for C supports arguments -Wdeprecated: YES
00:01:43.189 Compiler for C supports arguments -Wformat: YES
00:01:43.189 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:43.189 Compiler for C supports arguments -Wformat-security: NO
00:01:43.189 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:43.189 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:43.189 Compiler for C supports arguments -Wnested-externs: YES
00:01:43.189 Compiler for C supports arguments -Wold-style-definition: YES
00:01:43.189 Compiler for C supports arguments -Wpointer-arith: YES
00:01:43.189 Compiler for C supports arguments -Wsign-compare: YES
00:01:43.189 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:43.189 Compiler for C supports arguments -Wundef: YES
00:01:43.189 Compiler for C supports arguments -Wwrite-strings: YES
00:01:43.189 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:43.189 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:43.189 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:43.189 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:43.189 Program objdump found: YES (/usr/bin/objdump)
00:01:43.189 Compiler for C supports arguments -mavx512f: YES
00:01:43.189 Checking if "AVX512 checking" compiles: YES
00:01:43.189 Fetching value of define "__SSE4_2__" : 1
00:01:43.189 Fetching value of define "__AES__" : 1
00:01:43.189 Fetching value of define "__AVX__" : 1
00:01:43.189 Fetching value of define "__AVX2__" : (undefined)
00:01:43.189 Fetching value of define "__AVX512BW__" : (undefined)
00:01:43.189 Fetching value of define "__AVX512CD__" : (undefined)
00:01:43.189 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:43.189 Fetching value of define "__AVX512F__" : (undefined)
00:01:43.189 Fetching value of define "__AVX512VL__" : (undefined)
00:01:43.189 Fetching value of define "__PCLMUL__" : 1
00:01:43.189 Fetching value of define "__RDRND__" : 1
00:01:43.189 Fetching value of define "__RDSEED__" : (undefined)
00:01:43.189 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:43.189 Fetching value of define "__znver1__" : (undefined)
00:01:43.189 Fetching value of define "__znver2__" : (undefined)
00:01:43.189 Fetching value of define "__znver3__" : (undefined)
00:01:43.189 Fetching value of define "__znver4__" : (undefined)
00:01:43.189 Library asan found: YES
00:01:43.189 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:43.189 Message: lib/log: Defining dependency "log"
00:01:43.189 Message: lib/kvargs: Defining dependency "kvargs"
00:01:43.189 Message: lib/telemetry: Defining dependency "telemetry"
00:01:43.189 Library rt found: YES
00:01:43.189 Checking for function "getentropy" : NO
00:01:43.189 Message: lib/eal: Defining dependency "eal"
00:01:43.189 Message: lib/ring: Defining dependency "ring"
00:01:43.189 Message: lib/rcu: Defining dependency "rcu"
00:01:43.189 Message: lib/mempool: Defining dependency "mempool"
00:01:43.189 Message: lib/mbuf: Defining dependency "mbuf"
00:01:43.189 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:43.189 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:43.189 Compiler for C supports arguments -mpclmul: YES
00:01:43.189 Compiler for C supports arguments -maes: YES
00:01:43.189 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:43.189 Compiler for C supports arguments -mavx512bw: YES
00:01:43.189 Compiler for C supports arguments -mavx512dq: YES
00:01:43.189 Compiler for C supports arguments -mavx512vl: YES
00:01:43.189 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:43.189 Compiler for C supports arguments -mavx2: YES
00:01:43.189 Compiler for C supports arguments -mavx: YES
00:01:43.189 Message: lib/net: Defining dependency "net"
00:01:43.189 Message: lib/meter: Defining dependency "meter"
00:01:43.189 Message: lib/ethdev: Defining dependency "ethdev"
00:01:43.189 Message: lib/pci: Defining dependency "pci"
00:01:43.189 Message: lib/cmdline: Defining dependency "cmdline"
00:01:43.189 Message: lib/hash: Defining dependency "hash"
00:01:43.189 Message: lib/timer: Defining dependency "timer"
00:01:43.189 Message: lib/compressdev: Defining dependency "compressdev"
00:01:43.189 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:43.189 Message: lib/dmadev: Defining dependency "dmadev"
00:01:43.189 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:43.189 Message: lib/power: Defining dependency "power"
00:01:43.189 Message: lib/reorder: Defining dependency "reorder"
00:01:43.189 Message: lib/security: Defining dependency "security"
00:01:43.189 Has header "linux/userfaultfd.h" : YES
00:01:43.189 Has header "linux/vduse.h" : YES
00:01:43.189 Message: lib/vhost: Defining dependency "vhost"
00:01:43.189 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:43.189 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:43.189 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:43.189 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:43.189 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:43.189 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:43.189 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:43.189 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:43.189 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:43.189 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:43.189 Program doxygen found: YES (/usr/bin/doxygen)
00:01:43.189 Configuring doxy-api-html.conf using configuration
00:01:43.189 Configuring doxy-api-man.conf using configuration
00:01:43.189 Program mandb found: YES (/usr/bin/mandb)
00:01:43.189 Program sphinx-build found: NO
00:01:43.189 Configuring rte_build_config.h using configuration
00:01:43.189 Message:
00:01:43.189 =================
00:01:43.189 Applications Enabled
00:01:43.189 =================
00:01:43.189
00:01:43.189 apps:
00:01:43.189
00:01:43.189
00:01:43.189 Message:
00:01:43.189 =================
00:01:43.189 Libraries Enabled
00:01:43.189 =================
00:01:43.189
00:01:43.189 libs:
00:01:43.189 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:43.189 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:43.190 cryptodev, dmadev, power, reorder, security, vhost,
00:01:43.190
00:01:43.190 Message:
00:01:43.190 ===============
00:01:43.190 Drivers Enabled
00:01:43.190 ===============
00:01:43.190
00:01:43.190 common:
00:01:43.190
00:01:43.190 bus:
00:01:43.190 pci, vdev,
00:01:43.190 mempool:
00:01:43.190 ring,
00:01:43.190 dma:
00:01:43.190
00:01:43.190 net:
00:01:43.190
00:01:43.190 crypto:
00:01:43.190
00:01:43.190 compress:
00:01:43.190
00:01:43.190 vdpa:
00:01:43.190
00:01:43.190
00:01:43.190 Message:
00:01:43.190 =================
00:01:43.190 Content Skipped
00:01:43.190 =================
00:01:43.190
00:01:43.190 apps:
00:01:43.190 dumpcap: explicitly disabled via build config
00:01:43.190 graph: explicitly disabled via build config
00:01:43.190 pdump: explicitly disabled via build config
00:01:43.190 proc-info: explicitly disabled via build config
00:01:43.190 test-acl: explicitly disabled via build config
00:01:43.190 test-bbdev: explicitly disabled via build config
00:01:43.190 test-cmdline: explicitly disabled via build config
00:01:43.190 test-compress-perf: explicitly disabled via build config
00:01:43.190 test-crypto-perf: explicitly disabled via build config
00:01:43.190 test-dma-perf: explicitly disabled via build config
00:01:43.190 test-eventdev: explicitly disabled via build config
00:01:43.190 test-fib: explicitly disabled via build config
00:01:43.190 test-flow-perf: explicitly disabled via build config
00:01:43.190 test-gpudev: explicitly disabled via build config
00:01:43.190 test-mldev: explicitly disabled via build config
00:01:43.190 test-pipeline: explicitly disabled via build config
00:01:43.190 test-pmd: explicitly disabled via build config
00:01:43.190 test-regex: explicitly disabled via build config
00:01:43.190 test-sad: explicitly disabled via build config
00:01:43.190 test-security-perf: explicitly disabled via build config
00:01:43.190
00:01:43.190 libs:
00:01:43.190 argparse: explicitly disabled via build config
00:01:43.190 metrics: explicitly disabled via build config
00:01:43.190 acl: explicitly disabled via build config
00:01:43.190 bbdev: explicitly disabled via build config
00:01:43.190 bitratestats: explicitly disabled via build config
00:01:43.190 bpf: explicitly disabled via build config
00:01:43.190 cfgfile: explicitly disabled via build config
00:01:43.190 distributor: explicitly disabled via build config
00:01:43.190 efd: explicitly disabled via build config
00:01:43.190 eventdev: explicitly disabled via build config
00:01:43.190 dispatcher: explicitly disabled via build config
00:01:43.190 gpudev: explicitly disabled via build config
00:01:43.190 gro: explicitly disabled via build config
00:01:43.190 gso: explicitly disabled via build config
00:01:43.190 ip_frag: explicitly disabled via build config
00:01:43.190 jobstats: explicitly disabled via build config
00:01:43.190 latencystats: explicitly disabled via build config
00:01:43.190 lpm: explicitly disabled via build config
00:01:43.190 member: explicitly disabled via build config
00:01:43.190 pcapng: explicitly disabled via build config
00:01:43.190 rawdev: explicitly disabled via build config
00:01:43.190 regexdev: explicitly disabled via build config
00:01:43.190 mldev: explicitly disabled via build config
00:01:43.190 rib: explicitly disabled via build config
00:01:43.190 sched: explicitly disabled via build config
00:01:43.190 stack: explicitly disabled via build config
00:01:43.190 ipsec: explicitly disabled via build config
00:01:43.190 pdcp: explicitly disabled via build config
00:01:43.190 fib: explicitly disabled via build config
00:01:43.190 port: explicitly disabled via build config
00:01:43.190 pdump: explicitly disabled via build config
00:01:43.190 table: explicitly disabled via build config
00:01:43.190 pipeline: explicitly disabled via build config
00:01:43.190 graph: explicitly disabled via build config
00:01:43.190 node: explicitly disabled via build config
00:01:43.190
00:01:43.190 drivers:
00:01:43.190 common/cpt: not in enabled drivers build config
00:01:43.190 common/dpaax: not in enabled drivers build config
00:01:43.190 common/iavf: not in enabled drivers build config
00:01:43.190 common/idpf: not in enabled drivers build config
00:01:43.190 common/ionic: not in enabled drivers build config
00:01:43.190 common/mvep: not in enabled drivers build config
00:01:43.190 common/octeontx: not in enabled drivers build config
00:01:43.190 bus/auxiliary: not in enabled drivers build config
00:01:43.190 bus/cdx: not in enabled drivers build config
00:01:43.190 bus/dpaa: not in enabled drivers build config
00:01:43.190 bus/fslmc: not in enabled drivers build config
00:01:43.190 bus/ifpga: not in enabled drivers build config
00:01:43.190 bus/platform: not in enabled drivers build config
00:01:43.190 bus/uacce: not in enabled drivers build config
00:01:43.190 bus/vmbus: not in enabled drivers build config
00:01:43.190 common/cnxk: not in enabled drivers build config
00:01:43.190 common/mlx5: not in enabled drivers build config
00:01:43.190 common/nfp: not in enabled drivers build config
00:01:43.190 common/nitrox: not in enabled drivers build config
00:01:43.190 common/qat: not in enabled drivers build config
00:01:43.190 common/sfc_efx: not in enabled drivers build config
00:01:43.190 mempool/bucket: not in enabled drivers build config
00:01:43.190 mempool/cnxk: not in enabled drivers build config
00:01:43.190 mempool/dpaa: not in enabled drivers build config
00:01:43.190 mempool/dpaa2: not in enabled drivers build config
00:01:43.190 mempool/octeontx: not in enabled drivers build config
00:01:43.190 mempool/stack: not in enabled drivers build config
00:01:43.190 dma/cnxk: not in enabled drivers build config
00:01:43.190 dma/dpaa: not in enabled drivers build config
00:01:43.190 dma/dpaa2: not in enabled drivers build config
00:01:43.190 dma/hisilicon: not in enabled drivers build config
00:01:43.190 dma/idxd: not in enabled drivers build config
00:01:43.190 dma/ioat: not in enabled drivers build config
00:01:43.190 dma/skeleton: not in enabled drivers build config
00:01:43.190 net/af_packet: not in enabled drivers build config
00:01:43.190 net/af_xdp: not in enabled drivers build config
00:01:43.190 net/ark: not in enabled drivers build config
00:01:43.190 net/atlantic: not in enabled drivers build config
00:01:43.190 net/avp: not in enabled drivers build config
00:01:43.190 net/axgbe: not in enabled drivers build config
00:01:43.190 net/bnx2x: not in enabled drivers build config
00:01:43.190 net/bnxt: not in enabled drivers build config
00:01:43.190 net/bonding: not in enabled drivers build config
00:01:43.190 net/cnxk: not in enabled drivers build config
00:01:43.190 net/cpfl: not in enabled drivers build config
00:01:43.190 net/cxgbe: not in enabled drivers build config
00:01:43.190 net/dpaa: not in enabled drivers build config
00:01:43.190 net/dpaa2: not in enabled drivers build config
00:01:43.190 net/e1000: not in enabled drivers build config
00:01:43.190 net/ena: not in enabled drivers build config
00:01:43.190 net/enetc: not in enabled drivers build config
00:01:43.190 net/enetfec: not in enabled drivers build config
00:01:43.190 net/enic: not in enabled drivers build config
00:01:43.190 net/failsafe: not in enabled drivers build config
00:01:43.190 net/fm10k: not in enabled drivers build config
00:01:43.190 net/gve: not in enabled drivers build config
00:01:43.190 net/hinic: not in enabled drivers build config
00:01:43.190 net/hns3: not in enabled drivers build config
00:01:43.190 net/i40e: not in enabled drivers build config
00:01:43.190 net/iavf: not in enabled drivers build config
00:01:43.190 net/ice: not in enabled drivers build config
00:01:43.190 net/idpf: not in enabled drivers build config
00:01:43.190 net/igc: not in enabled drivers build config
00:01:43.190 net/ionic: not in enabled drivers build config
00:01:43.190 net/ipn3ke: not in enabled drivers build config
00:01:43.190 net/ixgbe: not in enabled drivers build config
00:01:43.190 net/mana: not in enabled drivers build config
00:01:43.190 net/memif: not in enabled drivers build config
00:01:43.190 net/mlx4: not in enabled drivers build config
00:01:43.190 net/mlx5: not in enabled drivers build config
00:01:43.190 net/mvneta: not in enabled drivers build config
00:01:43.190 net/mvpp2: not in enabled drivers build config
00:01:43.190 net/netvsc: not in enabled drivers build config
00:01:43.190 net/nfb: not in enabled drivers build config
00:01:43.190 net/nfp: not in enabled drivers build config
00:01:43.190 net/ngbe: not in enabled drivers build config
00:01:43.190 net/null: not in enabled drivers build config
00:01:43.190 net/octeontx: not in enabled drivers build config
00:01:43.190 net/octeon_ep: not in enabled drivers build config
00:01:43.190 net/pcap: not in enabled drivers build config
00:01:43.190 net/pfe: not in enabled drivers build config
00:01:43.190 net/qede: not in enabled drivers build config
00:01:43.190 net/ring: not in enabled drivers build config
00:01:43.190 net/sfc: not in enabled drivers build config
00:01:43.190 net/softnic: not in enabled drivers build config
00:01:43.190 net/tap: not in enabled drivers build config
00:01:43.190 net/thunderx: not in enabled drivers build config
00:01:43.190 net/txgbe: not in enabled drivers build config
00:01:43.190 net/vdev_netvsc: not in enabled drivers build config
00:01:43.190 net/vhost: not in enabled drivers build config
00:01:43.190 net/virtio: not in enabled drivers build config
00:01:43.190 net/vmxnet3: not in enabled drivers build config
00:01:43.190 raw/*: missing internal dependency, "rawdev"
00:01:43.190 crypto/armv8: not in enabled drivers build config
00:01:43.190 crypto/bcmfs: not in enabled drivers build config
00:01:43.190 crypto/caam_jr: not in enabled drivers build config
00:01:43.190 crypto/ccp: not in enabled drivers build config
00:01:43.190 crypto/cnxk: not in enabled drivers build config
00:01:43.190 crypto/dpaa_sec: not in enabled drivers build config
00:01:43.190 crypto/dpaa2_sec: not in enabled drivers build config
00:01:43.190 crypto/ipsec_mb: not in enabled drivers build config
00:01:43.190 crypto/mlx5: not in enabled drivers build config
00:01:43.190 crypto/mvsam: not in enabled drivers build config
00:01:43.190 crypto/nitrox: not in enabled drivers build config
00:01:43.190 crypto/null: not in enabled drivers build config
00:01:43.190 crypto/octeontx: not in enabled drivers build config
00:01:43.190 crypto/openssl: not in enabled drivers build config
00:01:43.190 crypto/scheduler: not in enabled drivers build config
00:01:43.190 crypto/uadk: not in enabled drivers build config
00:01:43.190 crypto/virtio: not in enabled drivers build config
00:01:43.190 compress/isal: not in enabled drivers build config
00:01:43.190 compress/mlx5: not in enabled drivers build config
00:01:43.190 compress/nitrox: not in enabled drivers build config
00:01:43.190 compress/octeontx: not in enabled drivers build config
00:01:43.190 compress/zlib: not in enabled drivers build config
00:01:43.190 regex/*: missing internal dependency, "regexdev"
00:01:43.190 ml/*: missing internal dependency, "mldev"
00:01:43.190 vdpa/ifc: not in enabled drivers build config
00:01:43.190 vdpa/mlx5: not in enabled drivers build config
00:01:43.191 vdpa/nfp: not in enabled drivers build config
00:01:43.191 vdpa/sfc: not in enabled drivers build config
00:01:43.191 event/*: missing internal dependency, "eventdev"
00:01:43.191 baseband/*: missing internal dependency, "bbdev"
00:01:43.191 gpu/*: missing internal dependency, "gpudev"
00:01:43.191
00:01:43.191
00:01:43.450 Build targets in project: 85
00:01:43.450
00:01:43.450 DPDK 24.03.0
00:01:43.450
00:01:43.450 User defined options
00:01:43.450 buildtype : debug
00:01:43.450 default_library : shared
00:01:43.450 libdir : lib
00:01:43.450 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:43.450 b_sanitize : address
00:01:43.450 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:43.450 c_link_args :
00:01:43.450 cpu_instruction_set: native
00:01:43.450 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:01:43.450 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:01:43.450 enable_docs : false
00:01:43.450 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:43.450 enable_kmods : false
00:01:43.450 max_lcores : 128
00:01:43.450 tests : false
00:01:43.450
00:01:43.450 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:43.716 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:43.716 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:43.991 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:43.991 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:43.991 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:43.991 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:43.991 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:43.991 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:43.991 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:43.991 [9/268] Linking static target lib/librte_kvargs.a
00:01:43.991 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:43.991 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:43.991 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:43.991 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:43.991 [14/268] Linking static target lib/librte_log.a
00:01:43.991 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:43.991 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:44.570 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:44.835 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:44.835 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:44.835 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:44.835 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:44.835 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:44.835 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:44.835 [24/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:44.835 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:44.835 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:44.835 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:44.835 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:44.835 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:44.835 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:44.835 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:44.835 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:44.835 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:44.835 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:44.835 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:44.835 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:44.835 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:44.835 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:44.835 [39/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:44.835 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:44.835 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:44.835 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:44.835 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:44.835 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:44.835 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:44.835 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:44.835 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:44.835 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:44.835 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:44.835 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:44.835 [51/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:44.835 [52/268] Linking static target lib/librte_telemetry.a
00:01:45.095 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:45.095 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:45.095 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:45.095 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:45.095 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:45.095 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:45.095 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:45.095 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:45.095 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:45.095 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:45.095 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:45.356 [64/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:45.356 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:45.356 [66/268] Linking target lib/librte_log.so.24.1
00:01:45.616 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:45.616 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:45.616 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:45.616 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:45.616 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:45.616 [72/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:45.616 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:45.616 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:45.616 [75/268] Linking static target lib/librte_pci.a
00:01:45.881 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:45.881 [77/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:45.881 [78/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:45.881 [79/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:45.881 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:45.881 [81/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:45.881 [82/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:45.881 [83/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:45.881 [84/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:45.881 [85/268] Linking static target lib/librte_ring.a
00:01:45.881 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:45.881 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:45.881 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:45.881 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:45.881 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:45.881 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:45.881 [92/268] Linking target lib/librte_kvargs.so.24.1
00:01:45.881 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:45.881 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:45.881 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:45.881 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:45.881 [97/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:45.881 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:45.881 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:45.881 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:45.881 [101/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:45.881 [102/268] Linking static target lib/librte_meter.a
00:01:45.881 [103/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:46.143 [104/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:46.143 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:46.143 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:46.143 [107/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:46.143 [108/268] Linking target lib/librte_telemetry.so.24.1
00:01:46.143 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:46.143 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:46.143 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:46.143 [112/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:46.143 [113/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:46.143 [114/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:01:46.143 [115/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:46.143 [116/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:46.143 [117/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:46.406 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:46.406 [119/268] Linking static target lib/librte_mempool.a
00:01:46.406 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:46.406 [121/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:46.406 [122/268] Linking static target lib/librte_rcu.a
00:01:46.406 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:46.406 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:46.406 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:46.406 [126/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:46.406 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:46.406 [128/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:46.406 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:46.669 [130/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:01:46.669 [131/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:46.669 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:46.669 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:46.669 [134/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:46.669 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:46.669 [136/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:46.669 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:46.669 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:46.929 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:46.929 [140/268] Linking static target lib/librte_cmdline.a
00:01:46.929 [141/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:46.929 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:46.929 [143/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:46.929 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:46.929 [145/268] Linking static target lib/librte_eal.a
lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:46.929 [147/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:46.929 [148/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:46.929 [149/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:46.929 [150/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.929 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:46.929 [152/268] Linking static target lib/librte_timer.a 00:01:47.188 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:47.188 [154/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:47.188 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:47.188 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:47.188 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:47.188 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:47.188 [159/268] Linking static target lib/librte_dmadev.a 00:01:47.448 [160/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.448 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:47.448 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.448 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:47.448 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:47.707 [165/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:47.707 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:47.707 [167/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:47.707 [168/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:47.707 [169/268] Linking static target lib/librte_net.a 00:01:47.707 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:47.707 [171/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.707 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:47.707 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:47.707 [174/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:47.707 [175/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:47.707 [176/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:47.707 [177/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:47.707 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:47.707 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.967 [180/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:47.967 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:47.967 [182/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:47.967 [183/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.967 [184/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:47.967 [185/268] 
Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:47.967 [186/268] Linking static target lib/librte_power.a 00:01:47.967 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:47.967 [188/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:47.967 [189/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:47.967 [190/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:47.967 [191/268] Linking static target drivers/librte_bus_vdev.a 00:01:47.967 [192/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:48.225 [193/268] Linking static target lib/librte_compressdev.a 00:01:48.226 [194/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:48.226 [195/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:48.226 [196/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:48.226 [197/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:48.226 [198/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:48.226 [199/268] Linking static target drivers/librte_bus_pci.a 00:01:48.226 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:48.226 [201/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.226 [202/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:48.226 [203/268] Linking static target lib/librte_hash.a 00:01:48.226 [204/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:48.226 [205/268] Linking static target lib/librte_reorder.a 00:01:48.485 [206/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:48.485 [207/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:48.485 [208/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:48.485 [209/268] Linking static target drivers/librte_mempool_ring.a 00:01:48.485 [210/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.485 [211/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.485 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:48.743 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.743 [214/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.743 [215/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.309 [216/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:49.309 [217/268] Linking static target lib/librte_security.a 00:01:49.309 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:49.567 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.134 [220/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:50.134 [221/268] Linking static target lib/librte_mbuf.a 00:01:50.392 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:50.392 [223/268] Linking static target lib/librte_cryptodev.a 
00:01:50.655 [224/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.221 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:51.221 [226/268] Linking static target lib/librte_ethdev.a 00:01:51.479 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.412 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.701 [229/268] Linking target lib/librte_eal.so.24.1 00:01:52.701 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:52.701 [231/268] Linking target lib/librte_pci.so.24.1 00:01:52.701 [232/268] Linking target lib/librte_ring.so.24.1 00:01:52.701 [233/268] Linking target lib/librte_timer.so.24.1 00:01:52.701 [234/268] Linking target lib/librte_dmadev.so.24.1 00:01:52.701 [235/268] Linking target lib/librte_meter.so.24.1 00:01:52.701 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:52.959 [237/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:52.959 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:52.959 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:52.959 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:52.959 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:52.959 [242/268] Linking target lib/librte_rcu.so.24.1 00:01:52.959 [243/268] Linking target lib/librte_mempool.so.24.1 00:01:52.959 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:53.217 [245/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:53.217 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:53.217 [247/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:53.217 [248/268] Linking target lib/librte_mbuf.so.24.1 00:01:53.217 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:53.217 [250/268] Linking target lib/librte_reorder.so.24.1 00:01:53.217 [251/268] Linking target lib/librte_compressdev.so.24.1 00:01:53.217 [252/268] Linking target lib/librte_net.so.24.1 00:01:53.217 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:01:53.474 [254/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:53.474 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:53.474 [256/268] Linking target lib/librte_cmdline.so.24.1 00:01:53.474 [257/268] Linking target lib/librte_hash.so.24.1 00:01:53.474 [258/268] Linking target lib/librte_security.so.24.1 00:01:53.732 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:54.296 [260/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:55.672 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.672 [262/268] Linking target lib/librte_ethdev.so.24.1 00:01:55.931 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:55.931 [264/268] Linking target lib/librte_power.so.24.1 00:02:17.854 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:17.854 [266/268] Linking static target lib/librte_vhost.a 00:02:18.112 [267/268] Generating 
lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.371 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:18.371 INFO: autodetecting backend as ninja 00:02:18.371 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:19.307 CC lib/ut_mock/mock.o 00:02:19.307 CC lib/ut/ut.o 00:02:19.307 CC lib/log/log.o 00:02:19.307 CC lib/log/log_flags.o 00:02:19.307 CC lib/log/log_deprecated.o 00:02:19.566 LIB libspdk_ut.a 00:02:19.566 LIB libspdk_log.a 00:02:19.566 LIB libspdk_ut_mock.a 00:02:19.566 SO libspdk_ut.so.2.0 00:02:19.566 SO libspdk_log.so.7.0 00:02:19.566 SO libspdk_ut_mock.so.6.0 00:02:19.566 SYMLINK libspdk_ut.so 00:02:19.566 SYMLINK libspdk_ut_mock.so 00:02:19.566 SYMLINK libspdk_log.so 00:02:19.824 CC lib/ioat/ioat.o 00:02:19.824 CC lib/dma/dma.o 00:02:19.824 CC lib/util/base64.o 00:02:19.824 CC lib/util/bit_array.o 00:02:19.824 CXX lib/trace_parser/trace.o 00:02:19.824 CC lib/util/cpuset.o 00:02:19.824 CC lib/util/crc16.o 00:02:19.824 CC lib/util/crc32.o 00:02:19.824 CC lib/util/crc32c.o 00:02:19.824 CC lib/util/crc32_ieee.o 00:02:19.824 CC lib/util/crc64.o 00:02:19.825 CC lib/util/dif.o 00:02:19.825 CC lib/util/fd.o 00:02:19.825 CC lib/util/file.o 00:02:19.825 CC lib/util/hexlify.o 00:02:19.825 CC lib/util/iov.o 00:02:19.825 CC lib/util/math.o 00:02:19.825 CC lib/util/pipe.o 00:02:19.825 CC lib/util/strerror_tls.o 00:02:19.825 CC lib/util/string.o 00:02:19.825 CC lib/util/uuid.o 00:02:19.825 CC lib/util/fd_group.o 00:02:19.825 CC lib/util/xor.o 00:02:19.825 CC lib/util/zipf.o 00:02:19.825 CC lib/vfio_user/host/vfio_user_pci.o 00:02:19.825 CC lib/vfio_user/host/vfio_user.o 00:02:19.825 LIB libspdk_dma.a 00:02:20.083 SO libspdk_dma.so.4.0 00:02:20.083 SYMLINK libspdk_dma.so 00:02:20.083 LIB libspdk_ioat.a 00:02:20.083 SO libspdk_ioat.so.7.0 00:02:20.083 LIB libspdk_vfio_user.a 00:02:20.083 SYMLINK libspdk_ioat.so 00:02:20.083 SO libspdk_vfio_user.so.5.0 00:02:20.341 SYMLINK libspdk_vfio_user.so 00:02:20.341 LIB libspdk_util.a 00:02:20.599 SO libspdk_util.so.9.1 00:02:20.599 SYMLINK libspdk_util.so 00:02:20.857 LIB libspdk_trace_parser.a 00:02:20.857 CC lib/idxd/idxd.o 00:02:20.857 CC lib/rdma_utils/rdma_utils.o 00:02:20.857 CC lib/conf/conf.o 00:02:20.857 CC lib/json/json_parse.o 00:02:20.857 CC lib/vmd/vmd.o 00:02:20.857 CC lib/idxd/idxd_user.o 00:02:20.857 CC lib/json/json_util.o 00:02:20.857 CC lib/vmd/led.o 00:02:20.857 CC lib/idxd/idxd_kernel.o 00:02:20.857 CC lib/json/json_write.o 00:02:20.857 CC lib/env_dpdk/env.o 00:02:20.857 CC lib/rdma_provider/common.o 00:02:20.857 CC lib/env_dpdk/memory.o 00:02:20.857 CC lib/env_dpdk/pci.o 00:02:20.857 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:20.857 CC lib/env_dpdk/init.o 00:02:20.857 CC lib/env_dpdk/threads.o 00:02:20.857 CC lib/env_dpdk/pci_ioat.o 00:02:20.857 CC lib/env_dpdk/pci_virtio.o 00:02:20.857 CC lib/env_dpdk/pci_vmd.o 00:02:20.857 CC lib/env_dpdk/pci_idxd.o 00:02:20.857 CC lib/env_dpdk/pci_event.o 00:02:20.857 CC lib/env_dpdk/sigbus_handler.o 00:02:20.857 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:20.857 CC lib/env_dpdk/pci_dpdk.o 00:02:20.857 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:20.857 SO libspdk_trace_parser.so.5.0 00:02:21.116 SYMLINK libspdk_trace_parser.so 00:02:21.116 LIB libspdk_rdma_provider.a 00:02:21.116 SO libspdk_rdma_provider.so.6.0 00:02:21.116 LIB libspdk_conf.a 00:02:21.116 SO libspdk_conf.so.6.0 00:02:21.116 SYMLINK libspdk_rdma_provider.so 00:02:21.116 LIB libspdk_rdma_utils.a 
00:02:21.116 SO libspdk_rdma_utils.so.1.0 00:02:21.116 SYMLINK libspdk_conf.so 00:02:21.116 LIB libspdk_json.a 00:02:21.116 SO libspdk_json.so.6.0 00:02:21.116 SYMLINK libspdk_rdma_utils.so 00:02:21.375 SYMLINK libspdk_json.so 00:02:21.375 CC lib/jsonrpc/jsonrpc_server.o 00:02:21.375 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:21.375 CC lib/jsonrpc/jsonrpc_client.o 00:02:21.375 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:21.633 LIB libspdk_idxd.a 00:02:21.633 LIB libspdk_vmd.a 00:02:21.633 SO libspdk_idxd.so.12.0 00:02:21.633 SO libspdk_vmd.so.6.0 00:02:21.633 LIB libspdk_jsonrpc.a 00:02:21.633 SO libspdk_jsonrpc.so.6.0 00:02:21.892 SYMLINK libspdk_idxd.so 00:02:21.892 SYMLINK libspdk_vmd.so 00:02:21.892 SYMLINK libspdk_jsonrpc.so 00:02:21.892 CC lib/rpc/rpc.o 00:02:22.150 LIB libspdk_rpc.a 00:02:22.150 SO libspdk_rpc.so.6.0 00:02:22.409 SYMLINK libspdk_rpc.so 00:02:22.409 CC lib/keyring/keyring.o 00:02:22.409 CC lib/trace/trace.o 00:02:22.409 CC lib/keyring/keyring_rpc.o 00:02:22.409 CC lib/notify/notify.o 00:02:22.409 CC lib/trace/trace_flags.o 00:02:22.409 CC lib/notify/notify_rpc.o 00:02:22.409 CC lib/trace/trace_rpc.o 00:02:22.667 LIB libspdk_notify.a 00:02:22.667 SO libspdk_notify.so.6.0 00:02:22.667 SYMLINK libspdk_notify.so 00:02:22.667 LIB libspdk_keyring.a 00:02:22.667 SO libspdk_keyring.so.1.0 00:02:22.667 LIB libspdk_trace.a 00:02:22.925 SO libspdk_trace.so.10.0 00:02:22.925 SYMLINK libspdk_keyring.so 00:02:22.925 SYMLINK libspdk_trace.so 00:02:22.925 CC lib/sock/sock.o 00:02:22.925 CC lib/sock/sock_rpc.o 00:02:22.925 CC lib/thread/thread.o 00:02:22.925 CC lib/thread/iobuf.o 00:02:23.512 LIB libspdk_sock.a 00:02:23.512 SO libspdk_sock.so.10.0 00:02:23.512 SYMLINK libspdk_sock.so 00:02:23.786 LIB libspdk_env_dpdk.a 00:02:23.786 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:23.786 CC lib/nvme/nvme_ctrlr.o 00:02:23.786 CC lib/nvme/nvme_fabric.o 00:02:23.786 CC lib/nvme/nvme_ns_cmd.o 00:02:23.786 CC lib/nvme/nvme_ns.o 00:02:23.786 CC lib/nvme/nvme_pcie_common.o 00:02:23.786 CC lib/nvme/nvme_pcie.o 00:02:23.786 CC lib/nvme/nvme_qpair.o 00:02:23.786 CC lib/nvme/nvme.o 00:02:23.786 CC lib/nvme/nvme_quirks.o 00:02:23.786 CC lib/nvme/nvme_transport.o 00:02:23.786 CC lib/nvme/nvme_discovery.o 00:02:23.786 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:23.786 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:23.786 CC lib/nvme/nvme_tcp.o 00:02:23.786 CC lib/nvme/nvme_opal.o 00:02:23.786 CC lib/nvme/nvme_io_msg.o 00:02:23.786 CC lib/nvme/nvme_poll_group.o 00:02:23.786 CC lib/nvme/nvme_zns.o 00:02:23.786 CC lib/nvme/nvme_stubs.o 00:02:23.786 CC lib/nvme/nvme_auth.o 00:02:23.786 CC lib/nvme/nvme_cuse.o 00:02:23.786 CC lib/nvme/nvme_rdma.o 00:02:23.786 SO libspdk_env_dpdk.so.14.1 00:02:24.044 SYMLINK libspdk_env_dpdk.so 00:02:25.415 LIB libspdk_thread.a 00:02:25.415 SO libspdk_thread.so.10.1 00:02:25.415 SYMLINK libspdk_thread.so 00:02:25.673 CC lib/virtio/virtio.o 00:02:25.673 CC lib/init/json_config.o 00:02:25.673 CC lib/accel/accel.o 00:02:25.673 CC lib/blob/blobstore.o 00:02:25.673 CC lib/init/subsystem.o 00:02:25.673 CC lib/virtio/virtio_vhost_user.o 00:02:25.673 CC lib/accel/accel_rpc.o 00:02:25.673 CC lib/blob/request.o 00:02:25.673 CC lib/init/subsystem_rpc.o 00:02:25.673 CC lib/virtio/virtio_vfio_user.o 00:02:25.673 CC lib/accel/accel_sw.o 00:02:25.673 CC lib/blob/zeroes.o 00:02:25.673 CC lib/init/rpc.o 00:02:25.673 CC lib/virtio/virtio_pci.o 00:02:25.673 CC lib/blob/blob_bs_dev.o 00:02:25.932 LIB libspdk_init.a 00:02:25.932 SO libspdk_init.so.5.0 00:02:26.191 SYMLINK libspdk_init.so 00:02:26.191 LIB 
libspdk_virtio.a 00:02:26.191 SO libspdk_virtio.so.7.0 00:02:26.191 SYMLINK libspdk_virtio.so 00:02:26.191 CC lib/event/app.o 00:02:26.191 CC lib/event/reactor.o 00:02:26.191 CC lib/event/log_rpc.o 00:02:26.191 CC lib/event/app_rpc.o 00:02:26.191 CC lib/event/scheduler_static.o 00:02:26.758 LIB libspdk_nvme.a 00:02:26.758 SO libspdk_nvme.so.13.1 00:02:26.758 LIB libspdk_event.a 00:02:26.758 SO libspdk_event.so.14.0 00:02:27.016 SYMLINK libspdk_event.so 00:02:27.016 LIB libspdk_accel.a 00:02:27.016 SO libspdk_accel.so.15.1 00:02:27.016 SYMLINK libspdk_nvme.so 00:02:27.016 SYMLINK libspdk_accel.so 00:02:27.274 CC lib/bdev/bdev.o 00:02:27.274 CC lib/bdev/bdev_rpc.o 00:02:27.274 CC lib/bdev/bdev_zone.o 00:02:27.274 CC lib/bdev/part.o 00:02:27.274 CC lib/bdev/scsi_nvme.o 00:02:29.803 LIB libspdk_blob.a 00:02:29.803 SO libspdk_blob.so.11.0 00:02:29.803 SYMLINK libspdk_blob.so 00:02:30.061 CC lib/lvol/lvol.o 00:02:30.061 CC lib/blobfs/blobfs.o 00:02:30.061 CC lib/blobfs/tree.o 00:02:30.627 LIB libspdk_bdev.a 00:02:30.627 SO libspdk_bdev.so.15.1 00:02:30.627 SYMLINK libspdk_bdev.so 00:02:30.892 CC lib/nbd/nbd.o 00:02:30.892 CC lib/ublk/ublk.o 00:02:30.892 CC lib/nvmf/ctrlr.o 00:02:30.892 CC lib/ftl/ftl_core.o 00:02:30.892 CC lib/nbd/nbd_rpc.o 00:02:30.892 CC lib/ublk/ublk_rpc.o 00:02:30.892 CC lib/ftl/ftl_init.o 00:02:30.892 CC lib/nvmf/ctrlr_discovery.o 00:02:30.892 CC lib/ftl/ftl_layout.o 00:02:30.892 CC lib/scsi/dev.o 00:02:30.892 CC lib/nvmf/ctrlr_bdev.o 00:02:30.892 CC lib/ftl/ftl_debug.o 00:02:30.892 CC lib/nvmf/subsystem.o 00:02:30.892 CC lib/scsi/lun.o 00:02:30.892 CC lib/ftl/ftl_io.o 00:02:30.892 CC lib/nvmf/nvmf.o 00:02:30.892 CC lib/scsi/port.o 00:02:30.892 CC lib/nvmf/nvmf_rpc.o 00:02:30.892 CC lib/scsi/scsi.o 00:02:30.892 CC lib/ftl/ftl_sb.o 00:02:30.892 CC lib/nvmf/transport.o 00:02:30.892 CC lib/scsi/scsi_bdev.o 00:02:30.892 CC lib/nvmf/tcp.o 00:02:30.892 CC lib/ftl/ftl_l2p.o 00:02:30.892 CC lib/scsi/scsi_pr.o 00:02:30.892 CC lib/ftl/ftl_l2p_flat.o 00:02:30.892 CC lib/nvmf/stubs.o 00:02:30.892 CC lib/scsi/scsi_rpc.o 00:02:30.892 CC lib/nvmf/mdns_server.o 00:02:30.892 CC lib/ftl/ftl_nv_cache.o 00:02:30.892 CC lib/scsi/task.o 00:02:30.892 CC lib/ftl/ftl_band.o 00:02:30.892 CC lib/nvmf/rdma.o 00:02:30.892 CC lib/ftl/ftl_band_ops.o 00:02:30.892 CC lib/nvmf/auth.o 00:02:30.892 CC lib/ftl/ftl_writer.o 00:02:30.892 CC lib/ftl/ftl_rq.o 00:02:30.892 CC lib/ftl/ftl_reloc.o 00:02:30.892 CC lib/ftl/ftl_l2p_cache.o 00:02:30.892 CC lib/ftl/ftl_p2l.o 00:02:30.892 CC lib/ftl/mngt/ftl_mngt.o 00:02:30.892 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:30.892 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:30.892 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:30.892 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:30.892 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:31.151 LIB libspdk_blobfs.a 00:02:31.151 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:31.151 SO libspdk_blobfs.so.10.0 00:02:31.151 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:31.416 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:31.416 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:31.416 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:31.416 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:31.416 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:31.416 CC lib/ftl/utils/ftl_conf.o 00:02:31.416 LIB libspdk_lvol.a 00:02:31.416 CC lib/ftl/utils/ftl_md.o 00:02:31.416 SO libspdk_lvol.so.10.0 00:02:31.416 SYMLINK libspdk_blobfs.so 00:02:31.416 CC lib/ftl/utils/ftl_mempool.o 00:02:31.416 CC lib/ftl/utils/ftl_bitmap.o 00:02:31.416 CC lib/ftl/utils/ftl_property.o 00:02:31.416 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:31.416 CC 
lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:31.416 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:31.416 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:31.416 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:31.416 SYMLINK libspdk_lvol.so 00:02:31.416 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:31.416 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:31.416 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:31.416 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:31.674 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:31.674 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:31.674 CC lib/ftl/base/ftl_base_dev.o 00:02:31.675 CC lib/ftl/base/ftl_base_bdev.o 00:02:31.675 CC lib/ftl/ftl_trace.o 00:02:31.933 LIB libspdk_nbd.a 00:02:31.933 SO libspdk_nbd.so.7.0 00:02:31.933 SYMLINK libspdk_nbd.so 00:02:32.191 LIB libspdk_scsi.a 00:02:32.191 SO libspdk_scsi.so.9.0 00:02:32.191 LIB libspdk_ublk.a 00:02:32.191 SO libspdk_ublk.so.3.0 00:02:32.191 SYMLINK libspdk_scsi.so 00:02:32.191 SYMLINK libspdk_ublk.so 00:02:32.449 CC lib/vhost/vhost.o 00:02:32.449 CC lib/iscsi/conn.o 00:02:32.449 CC lib/iscsi/init_grp.o 00:02:32.449 CC lib/vhost/vhost_rpc.o 00:02:32.449 CC lib/vhost/vhost_scsi.o 00:02:32.449 CC lib/iscsi/iscsi.o 00:02:32.449 CC lib/vhost/vhost_blk.o 00:02:32.449 CC lib/iscsi/md5.o 00:02:32.449 CC lib/vhost/rte_vhost_user.o 00:02:32.449 CC lib/iscsi/param.o 00:02:32.449 CC lib/iscsi/portal_grp.o 00:02:32.449 CC lib/iscsi/tgt_node.o 00:02:32.449 CC lib/iscsi/iscsi_subsystem.o 00:02:32.449 CC lib/iscsi/iscsi_rpc.o 00:02:32.449 CC lib/iscsi/task.o 00:02:32.707 LIB libspdk_ftl.a 00:02:32.965 SO libspdk_ftl.so.9.0 00:02:33.223 SYMLINK libspdk_ftl.so 00:02:33.789 LIB libspdk_vhost.a 00:02:33.789 SO libspdk_vhost.so.8.0 00:02:34.046 SYMLINK libspdk_vhost.so 00:02:34.304 LIB libspdk_nvmf.a 00:02:34.304 LIB libspdk_iscsi.a 00:02:34.304 SO libspdk_iscsi.so.8.0 00:02:34.304 SO libspdk_nvmf.so.18.1 00:02:34.561 SYMLINK libspdk_iscsi.so 00:02:34.561 SYMLINK libspdk_nvmf.so 00:02:34.821 CC module/env_dpdk/env_dpdk_rpc.o 00:02:34.821 CC module/accel/dsa/accel_dsa.o 00:02:35.079 CC module/scheduler/gscheduler/gscheduler.o 00:02:35.079 CC module/keyring/file/keyring.o 00:02:35.079 CC module/blob/bdev/blob_bdev.o 00:02:35.079 CC module/accel/dsa/accel_dsa_rpc.o 00:02:35.079 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:35.079 CC module/accel/ioat/accel_ioat.o 00:02:35.079 CC module/keyring/file/keyring_rpc.o 00:02:35.079 CC module/accel/ioat/accel_ioat_rpc.o 00:02:35.079 CC module/accel/iaa/accel_iaa.o 00:02:35.079 CC module/keyring/linux/keyring.o 00:02:35.079 CC module/accel/iaa/accel_iaa_rpc.o 00:02:35.079 CC module/keyring/linux/keyring_rpc.o 00:02:35.079 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:35.079 CC module/sock/posix/posix.o 00:02:35.079 CC module/accel/error/accel_error.o 00:02:35.079 CC module/accel/error/accel_error_rpc.o 00:02:35.079 LIB libspdk_env_dpdk_rpc.a 00:02:35.079 SO libspdk_env_dpdk_rpc.so.6.0 00:02:35.079 SYMLINK libspdk_env_dpdk_rpc.so 00:02:35.079 LIB libspdk_keyring_file.a 00:02:35.079 LIB libspdk_keyring_linux.a 00:02:35.079 LIB libspdk_scheduler_gscheduler.a 00:02:35.079 LIB libspdk_scheduler_dpdk_governor.a 00:02:35.079 SO libspdk_keyring_file.so.1.0 00:02:35.079 SO libspdk_keyring_linux.so.1.0 00:02:35.079 SO libspdk_scheduler_gscheduler.so.4.0 00:02:35.079 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:35.079 LIB libspdk_accel_error.a 00:02:35.079 LIB libspdk_accel_ioat.a 00:02:35.079 LIB libspdk_scheduler_dynamic.a 00:02:35.336 LIB libspdk_accel_iaa.a 00:02:35.336 SO libspdk_accel_error.so.2.0 00:02:35.336 SYMLINK 
libspdk_scheduler_gscheduler.so 00:02:35.336 SO libspdk_accel_ioat.so.6.0 00:02:35.336 SYMLINK libspdk_keyring_linux.so 00:02:35.336 SYMLINK libspdk_keyring_file.so 00:02:35.336 SO libspdk_scheduler_dynamic.so.4.0 00:02:35.336 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:35.336 SO libspdk_accel_iaa.so.3.0 00:02:35.336 SYMLINK libspdk_accel_error.so 00:02:35.336 SYMLINK libspdk_scheduler_dynamic.so 00:02:35.336 SYMLINK libspdk_accel_ioat.so 00:02:35.336 LIB libspdk_accel_dsa.a 00:02:35.336 LIB libspdk_blob_bdev.a 00:02:35.336 SYMLINK libspdk_accel_iaa.so 00:02:35.336 SO libspdk_accel_dsa.so.5.0 00:02:35.336 SO libspdk_blob_bdev.so.11.0 00:02:35.336 SYMLINK libspdk_blob_bdev.so 00:02:35.336 SYMLINK libspdk_accel_dsa.so 00:02:35.594 CC module/bdev/error/vbdev_error.o 00:02:35.594 CC module/bdev/gpt/gpt.o 00:02:35.594 CC module/bdev/error/vbdev_error_rpc.o 00:02:35.594 CC module/bdev/gpt/vbdev_gpt.o 00:02:35.594 CC module/bdev/delay/vbdev_delay.o 00:02:35.594 CC module/bdev/malloc/bdev_malloc.o 00:02:35.594 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:35.594 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:35.594 CC module/blobfs/bdev/blobfs_bdev.o 00:02:35.594 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:35.594 CC module/bdev/raid/bdev_raid.o 00:02:35.594 CC module/bdev/passthru/vbdev_passthru.o 00:02:35.594 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:35.594 CC module/bdev/lvol/vbdev_lvol.o 00:02:35.594 CC module/bdev/raid/bdev_raid_rpc.o 00:02:35.594 CC module/bdev/split/vbdev_split.o 00:02:35.594 CC module/bdev/null/bdev_null_rpc.o 00:02:35.594 CC module/bdev/null/bdev_null.o 00:02:35.594 CC module/bdev/split/vbdev_split_rpc.o 00:02:35.594 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:35.594 CC module/bdev/raid/bdev_raid_sb.o 00:02:35.594 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:35.594 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:35.594 CC module/bdev/raid/raid0.o 00:02:35.594 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:35.594 CC module/bdev/raid/raid1.o 00:02:35.594 CC module/bdev/ftl/bdev_ftl.o 00:02:35.594 CC module/bdev/aio/bdev_aio.o 00:02:35.594 CC module/bdev/raid/concat.o 00:02:35.594 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:35.594 CC module/bdev/aio/bdev_aio_rpc.o 00:02:35.594 CC module/bdev/iscsi/bdev_iscsi.o 00:02:35.594 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:35.594 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:35.594 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:35.594 CC module/bdev/nvme/bdev_nvme.o 00:02:35.594 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:35.594 CC module/bdev/nvme/nvme_rpc.o 00:02:35.594 CC module/bdev/nvme/bdev_mdns_client.o 00:02:35.594 CC module/bdev/nvme/vbdev_opal.o 00:02:35.594 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:35.594 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:36.160 LIB libspdk_bdev_null.a 00:02:36.160 LIB libspdk_blobfs_bdev.a 00:02:36.160 SO libspdk_bdev_null.so.6.0 00:02:36.160 SO libspdk_blobfs_bdev.so.6.0 00:02:36.160 SYMLINK libspdk_bdev_null.so 00:02:36.160 LIB libspdk_bdev_split.a 00:02:36.160 SYMLINK libspdk_blobfs_bdev.so 00:02:36.160 LIB libspdk_bdev_error.a 00:02:36.160 SO libspdk_bdev_split.so.6.0 00:02:36.160 LIB libspdk_bdev_gpt.a 00:02:36.160 LIB libspdk_sock_posix.a 00:02:36.160 SO libspdk_bdev_gpt.so.6.0 00:02:36.160 SO libspdk_bdev_error.so.6.0 00:02:36.160 SO libspdk_sock_posix.so.6.0 00:02:36.160 LIB libspdk_bdev_iscsi.a 00:02:36.160 SYMLINK libspdk_bdev_split.so 00:02:36.160 SO libspdk_bdev_iscsi.so.6.0 00:02:36.160 LIB libspdk_bdev_ftl.a 00:02:36.160 SYMLINK libspdk_bdev_gpt.so 
00:02:36.160 SYMLINK libspdk_bdev_error.so 00:02:36.160 LIB libspdk_bdev_passthru.a 00:02:36.160 LIB libspdk_bdev_aio.a 00:02:36.160 SO libspdk_bdev_ftl.so.6.0 00:02:36.160 SO libspdk_bdev_passthru.so.6.0 00:02:36.160 SO libspdk_bdev_aio.so.6.0 00:02:36.160 SYMLINK libspdk_sock_posix.so 00:02:36.160 LIB libspdk_bdev_delay.a 00:02:36.160 SYMLINK libspdk_bdev_iscsi.so 00:02:36.417 SO libspdk_bdev_delay.so.6.0 00:02:36.417 LIB libspdk_bdev_zone_block.a 00:02:36.417 SYMLINK libspdk_bdev_ftl.so 00:02:36.417 LIB libspdk_bdev_malloc.a 00:02:36.417 SYMLINK libspdk_bdev_passthru.so 00:02:36.417 SYMLINK libspdk_bdev_aio.so 00:02:36.417 SO libspdk_bdev_zone_block.so.6.0 00:02:36.417 SO libspdk_bdev_malloc.so.6.0 00:02:36.417 LIB libspdk_bdev_lvol.a 00:02:36.417 SYMLINK libspdk_bdev_delay.so 00:02:36.417 SO libspdk_bdev_lvol.so.6.0 00:02:36.417 SYMLINK libspdk_bdev_zone_block.so 00:02:36.417 SYMLINK libspdk_bdev_malloc.so 00:02:36.417 SYMLINK libspdk_bdev_lvol.so 00:02:36.675 LIB libspdk_bdev_virtio.a 00:02:36.675 SO libspdk_bdev_virtio.so.6.0 00:02:36.675 SYMLINK libspdk_bdev_virtio.so 00:02:36.933 LIB libspdk_bdev_raid.a 00:02:37.192 SO libspdk_bdev_raid.so.6.0 00:02:37.192 SYMLINK libspdk_bdev_raid.so 00:02:38.627 LIB libspdk_bdev_nvme.a 00:02:38.627 SO libspdk_bdev_nvme.so.7.0 00:02:38.627 SYMLINK libspdk_bdev_nvme.so 00:02:39.240 CC module/event/subsystems/iobuf/iobuf.o 00:02:39.240 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:39.240 CC module/event/subsystems/keyring/keyring.o 00:02:39.240 CC module/event/subsystems/vmd/vmd.o 00:02:39.240 CC module/event/subsystems/scheduler/scheduler.o 00:02:39.240 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:39.240 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:39.240 CC module/event/subsystems/sock/sock.o 00:02:39.240 LIB libspdk_event_keyring.a 00:02:39.240 LIB libspdk_event_vhost_blk.a 00:02:39.240 LIB libspdk_event_sock.a 00:02:39.240 LIB libspdk_event_scheduler.a 00:02:39.240 LIB libspdk_event_vmd.a 00:02:39.240 SO libspdk_event_keyring.so.1.0 00:02:39.240 LIB libspdk_event_iobuf.a 00:02:39.240 SO libspdk_event_vhost_blk.so.3.0 00:02:39.240 SO libspdk_event_sock.so.5.0 00:02:39.240 SO libspdk_event_scheduler.so.4.0 00:02:39.240 SO libspdk_event_vmd.so.6.0 00:02:39.240 SO libspdk_event_iobuf.so.3.0 00:02:39.240 SYMLINK libspdk_event_keyring.so 00:02:39.240 SYMLINK libspdk_event_vhost_blk.so 00:02:39.240 SYMLINK libspdk_event_sock.so 00:02:39.240 SYMLINK libspdk_event_scheduler.so 00:02:39.240 SYMLINK libspdk_event_vmd.so 00:02:39.240 SYMLINK libspdk_event_iobuf.so 00:02:39.498 CC module/event/subsystems/accel/accel.o 00:02:39.756 LIB libspdk_event_accel.a 00:02:39.756 SO libspdk_event_accel.so.6.0 00:02:39.756 SYMLINK libspdk_event_accel.so 00:02:40.013 CC module/event/subsystems/bdev/bdev.o 00:02:40.013 LIB libspdk_event_bdev.a 00:02:40.013 SO libspdk_event_bdev.so.6.0 00:02:40.270 SYMLINK libspdk_event_bdev.so 00:02:40.270 CC module/event/subsystems/nbd/nbd.o 00:02:40.270 CC module/event/subsystems/scsi/scsi.o 00:02:40.270 CC module/event/subsystems/ublk/ublk.o 00:02:40.270 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:40.270 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:40.528 LIB libspdk_event_nbd.a 00:02:40.528 LIB libspdk_event_ublk.a 00:02:40.528 SO libspdk_event_nbd.so.6.0 00:02:40.528 LIB libspdk_event_scsi.a 00:02:40.528 SO libspdk_event_ublk.so.3.0 00:02:40.528 SO libspdk_event_scsi.so.6.0 00:02:40.528 SYMLINK libspdk_event_nbd.so 00:02:40.528 SYMLINK libspdk_event_ublk.so 00:02:40.528 SYMLINK libspdk_event_scsi.so 
00:02:40.528 LIB libspdk_event_nvmf.a 00:02:40.528 SO libspdk_event_nvmf.so.6.0 00:02:40.786 SYMLINK libspdk_event_nvmf.so 00:02:40.786 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:40.786 CC module/event/subsystems/iscsi/iscsi.o 00:02:40.786 LIB libspdk_event_vhost_scsi.a 00:02:40.786 LIB libspdk_event_iscsi.a 00:02:41.044 SO libspdk_event_vhost_scsi.so.3.0 00:02:41.044 SO libspdk_event_iscsi.so.6.0 00:02:41.044 SYMLINK libspdk_event_vhost_scsi.so 00:02:41.044 SYMLINK libspdk_event_iscsi.so 00:02:41.044 SO libspdk.so.6.0 00:02:41.044 SYMLINK libspdk.so 00:02:41.305 CC app/trace_record/trace_record.o 00:02:41.305 CXX app/trace/trace.o 00:02:41.305 TEST_HEADER include/spdk/accel.h 00:02:41.305 CC test/rpc_client/rpc_client_test.o 00:02:41.305 TEST_HEADER include/spdk/assert.h 00:02:41.305 TEST_HEADER include/spdk/accel_module.h 00:02:41.305 TEST_HEADER include/spdk/barrier.h 00:02:41.305 TEST_HEADER include/spdk/base64.h 00:02:41.305 TEST_HEADER include/spdk/bdev.h 00:02:41.305 TEST_HEADER include/spdk/bdev_module.h 00:02:41.305 CC app/spdk_nvme_perf/perf.o 00:02:41.305 TEST_HEADER include/spdk/bdev_zone.h 00:02:41.305 CC app/spdk_top/spdk_top.o 00:02:41.305 CC app/spdk_lspci/spdk_lspci.o 00:02:41.305 TEST_HEADER include/spdk/bit_array.h 00:02:41.305 TEST_HEADER include/spdk/bit_pool.h 00:02:41.305 CC app/spdk_nvme_identify/identify.o 00:02:41.305 TEST_HEADER include/spdk/blob_bdev.h 00:02:41.305 CC app/spdk_nvme_discover/discovery_aer.o 00:02:41.305 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:41.305 TEST_HEADER include/spdk/blobfs.h 00:02:41.305 TEST_HEADER include/spdk/blob.h 00:02:41.305 TEST_HEADER include/spdk/conf.h 00:02:41.305 TEST_HEADER include/spdk/config.h 00:02:41.305 TEST_HEADER include/spdk/cpuset.h 00:02:41.305 TEST_HEADER include/spdk/crc16.h 00:02:41.305 TEST_HEADER include/spdk/crc32.h 00:02:41.305 TEST_HEADER include/spdk/crc64.h 00:02:41.305 TEST_HEADER include/spdk/dif.h 00:02:41.305 TEST_HEADER include/spdk/dma.h 00:02:41.305 TEST_HEADER include/spdk/endian.h 00:02:41.305 TEST_HEADER include/spdk/env_dpdk.h 00:02:41.305 TEST_HEADER include/spdk/env.h 00:02:41.305 TEST_HEADER include/spdk/event.h 00:02:41.305 TEST_HEADER include/spdk/fd_group.h 00:02:41.305 TEST_HEADER include/spdk/file.h 00:02:41.305 TEST_HEADER include/spdk/fd.h 00:02:41.305 TEST_HEADER include/spdk/ftl.h 00:02:41.305 TEST_HEADER include/spdk/gpt_spec.h 00:02:41.305 TEST_HEADER include/spdk/hexlify.h 00:02:41.305 TEST_HEADER include/spdk/histogram_data.h 00:02:41.305 TEST_HEADER include/spdk/idxd.h 00:02:41.305 TEST_HEADER include/spdk/idxd_spec.h 00:02:41.305 TEST_HEADER include/spdk/init.h 00:02:41.305 TEST_HEADER include/spdk/ioat.h 00:02:41.305 TEST_HEADER include/spdk/ioat_spec.h 00:02:41.305 TEST_HEADER include/spdk/iscsi_spec.h 00:02:41.305 TEST_HEADER include/spdk/json.h 00:02:41.305 TEST_HEADER include/spdk/jsonrpc.h 00:02:41.305 TEST_HEADER include/spdk/keyring.h 00:02:41.305 TEST_HEADER include/spdk/keyring_module.h 00:02:41.305 TEST_HEADER include/spdk/likely.h 00:02:41.305 TEST_HEADER include/spdk/log.h 00:02:41.305 TEST_HEADER include/spdk/memory.h 00:02:41.305 TEST_HEADER include/spdk/lvol.h 00:02:41.305 TEST_HEADER include/spdk/mmio.h 00:02:41.305 TEST_HEADER include/spdk/nbd.h 00:02:41.305 TEST_HEADER include/spdk/nvme.h 00:02:41.305 TEST_HEADER include/spdk/notify.h 00:02:41.305 TEST_HEADER include/spdk/nvme_intel.h 00:02:41.305 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:41.305 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:41.305 TEST_HEADER include/spdk/nvme_spec.h 
00:02:41.305 TEST_HEADER include/spdk/nvme_zns.h 00:02:41.305 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:41.305 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:41.305 TEST_HEADER include/spdk/nvmf.h 00:02:41.305 TEST_HEADER include/spdk/nvmf_spec.h 00:02:41.305 TEST_HEADER include/spdk/nvmf_transport.h 00:02:41.305 TEST_HEADER include/spdk/opal.h 00:02:41.305 TEST_HEADER include/spdk/opal_spec.h 00:02:41.305 TEST_HEADER include/spdk/pipe.h 00:02:41.305 TEST_HEADER include/spdk/pci_ids.h 00:02:41.305 TEST_HEADER include/spdk/queue.h 00:02:41.305 TEST_HEADER include/spdk/reduce.h 00:02:41.305 TEST_HEADER include/spdk/rpc.h 00:02:41.305 TEST_HEADER include/spdk/scheduler.h 00:02:41.305 TEST_HEADER include/spdk/scsi_spec.h 00:02:41.305 TEST_HEADER include/spdk/scsi.h 00:02:41.305 TEST_HEADER include/spdk/sock.h 00:02:41.305 TEST_HEADER include/spdk/stdinc.h 00:02:41.305 TEST_HEADER include/spdk/string.h 00:02:41.305 TEST_HEADER include/spdk/thread.h 00:02:41.305 TEST_HEADER include/spdk/trace.h 00:02:41.305 TEST_HEADER include/spdk/trace_parser.h 00:02:41.305 TEST_HEADER include/spdk/tree.h 00:02:41.305 TEST_HEADER include/spdk/ublk.h 00:02:41.305 TEST_HEADER include/spdk/util.h 00:02:41.305 TEST_HEADER include/spdk/uuid.h 00:02:41.305 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:41.305 TEST_HEADER include/spdk/version.h 00:02:41.305 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:41.305 TEST_HEADER include/spdk/vhost.h 00:02:41.305 TEST_HEADER include/spdk/vmd.h 00:02:41.305 TEST_HEADER include/spdk/xor.h 00:02:41.305 TEST_HEADER include/spdk/zipf.h 00:02:41.305 CXX test/cpp_headers/accel.o 00:02:41.305 CXX test/cpp_headers/accel_module.o 00:02:41.305 CXX test/cpp_headers/assert.o 00:02:41.305 CXX test/cpp_headers/barrier.o 00:02:41.305 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:41.305 CXX test/cpp_headers/base64.o 00:02:41.305 CC app/spdk_dd/spdk_dd.o 00:02:41.305 CXX test/cpp_headers/bdev.o 00:02:41.305 CXX test/cpp_headers/bdev_module.o 00:02:41.305 CXX test/cpp_headers/bdev_zone.o 00:02:41.305 CXX test/cpp_headers/bit_array.o 00:02:41.305 CXX test/cpp_headers/bit_pool.o 00:02:41.305 CXX test/cpp_headers/blob_bdev.o 00:02:41.305 CXX test/cpp_headers/blobfs_bdev.o 00:02:41.305 CXX test/cpp_headers/blobfs.o 00:02:41.305 CXX test/cpp_headers/blob.o 00:02:41.305 CXX test/cpp_headers/conf.o 00:02:41.305 CXX test/cpp_headers/config.o 00:02:41.305 CXX test/cpp_headers/cpuset.o 00:02:41.305 CXX test/cpp_headers/crc16.o 00:02:41.305 CC app/iscsi_tgt/iscsi_tgt.o 00:02:41.305 CC app/nvmf_tgt/nvmf_main.o 00:02:41.305 CXX test/cpp_headers/crc32.o 00:02:41.305 CC examples/ioat/verify/verify.o 00:02:41.305 CC app/spdk_tgt/spdk_tgt.o 00:02:41.305 CC test/app/stub/stub.o 00:02:41.305 CC test/app/jsoncat/jsoncat.o 00:02:41.305 CC examples/ioat/perf/perf.o 00:02:41.305 CC examples/util/zipf/zipf.o 00:02:41.305 CC test/app/histogram_perf/histogram_perf.o 00:02:41.305 CC test/thread/poller_perf/poller_perf.o 00:02:41.305 CC test/env/pci/pci_ut.o 00:02:41.305 CC test/env/vtophys/vtophys.o 00:02:41.305 CC test/env/memory/memory_ut.o 00:02:41.305 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:41.567 CC app/fio/nvme/fio_plugin.o 00:02:41.567 CC test/dma/test_dma/test_dma.o 00:02:41.567 CC test/app/bdev_svc/bdev_svc.o 00:02:41.567 CC app/fio/bdev/fio_plugin.o 00:02:41.567 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:41.567 LINK spdk_lspci 00:02:41.567 CC test/env/mem_callbacks/mem_callbacks.o 00:02:41.829 LINK rpc_client_test 00:02:41.829 LINK spdk_nvme_discover 00:02:41.829 LINK jsoncat 00:02:41.829 
LINK poller_perf 00:02:41.829 LINK interrupt_tgt 00:02:41.829 LINK vtophys 00:02:41.829 LINK zipf 00:02:41.829 CXX test/cpp_headers/crc64.o 00:02:41.829 LINK histogram_perf 00:02:41.829 CXX test/cpp_headers/dif.o 00:02:41.829 LINK env_dpdk_post_init 00:02:41.829 CXX test/cpp_headers/dma.o 00:02:41.829 LINK stub 00:02:41.829 LINK nvmf_tgt 00:02:41.829 CXX test/cpp_headers/endian.o 00:02:41.829 CXX test/cpp_headers/env_dpdk.o 00:02:41.829 CXX test/cpp_headers/env.o 00:02:41.829 CXX test/cpp_headers/event.o 00:02:41.829 CXX test/cpp_headers/fd_group.o 00:02:41.829 CXX test/cpp_headers/fd.o 00:02:41.829 LINK iscsi_tgt 00:02:41.829 CXX test/cpp_headers/file.o 00:02:41.829 CXX test/cpp_headers/ftl.o 00:02:41.829 CXX test/cpp_headers/gpt_spec.o 00:02:41.829 CXX test/cpp_headers/hexlify.o 00:02:41.829 LINK spdk_tgt 00:02:41.829 CXX test/cpp_headers/histogram_data.o 00:02:41.829 CXX test/cpp_headers/idxd.o 00:02:41.829 LINK bdev_svc 00:02:41.829 CXX test/cpp_headers/idxd_spec.o 00:02:41.829 LINK spdk_trace_record 00:02:41.829 LINK ioat_perf 00:02:41.829 LINK verify 00:02:41.829 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:42.090 CXX test/cpp_headers/init.o 00:02:42.090 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:42.090 CXX test/cpp_headers/ioat.o 00:02:42.090 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:42.090 CXX test/cpp_headers/ioat_spec.o 00:02:42.090 CXX test/cpp_headers/iscsi_spec.o 00:02:42.090 CXX test/cpp_headers/json.o 00:02:42.090 CXX test/cpp_headers/jsonrpc.o 00:02:42.090 CXX test/cpp_headers/keyring.o 00:02:42.090 LINK spdk_dd 00:02:42.090 CXX test/cpp_headers/keyring_module.o 00:02:42.090 CXX test/cpp_headers/likely.o 00:02:42.361 CXX test/cpp_headers/log.o 00:02:42.361 CXX test/cpp_headers/lvol.o 00:02:42.361 CXX test/cpp_headers/memory.o 00:02:42.361 CXX test/cpp_headers/mmio.o 00:02:42.361 CXX test/cpp_headers/nbd.o 00:02:42.361 CXX test/cpp_headers/notify.o 00:02:42.361 CXX test/cpp_headers/nvme.o 00:02:42.361 LINK spdk_trace 00:02:42.361 CXX test/cpp_headers/nvme_intel.o 00:02:42.361 CXX test/cpp_headers/nvme_ocssd.o 00:02:42.361 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:42.361 CXX test/cpp_headers/nvme_spec.o 00:02:42.361 CXX test/cpp_headers/nvme_zns.o 00:02:42.361 CXX test/cpp_headers/nvmf_cmd.o 00:02:42.361 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:42.361 CXX test/cpp_headers/nvmf.o 00:02:42.361 CXX test/cpp_headers/nvmf_spec.o 00:02:42.361 CXX test/cpp_headers/nvmf_transport.o 00:02:42.361 CXX test/cpp_headers/opal.o 00:02:42.361 LINK test_dma 00:02:42.361 LINK pci_ut 00:02:42.361 CC test/event/event_perf/event_perf.o 00:02:42.361 CC test/event/reactor/reactor.o 00:02:42.361 CC test/event/reactor_perf/reactor_perf.o 00:02:42.361 CXX test/cpp_headers/opal_spec.o 00:02:42.361 CXX test/cpp_headers/pci_ids.o 00:02:42.361 CC examples/sock/hello_world/hello_sock.o 00:02:42.361 CC test/event/app_repeat/app_repeat.o 00:02:42.361 CC test/event/scheduler/scheduler.o 00:02:42.624 CXX test/cpp_headers/pipe.o 00:02:42.624 CC examples/thread/thread/thread_ex.o 00:02:42.624 CXX test/cpp_headers/queue.o 00:02:42.624 CXX test/cpp_headers/reduce.o 00:02:42.624 CC examples/vmd/lsvmd/lsvmd.o 00:02:42.624 CC examples/idxd/perf/perf.o 00:02:42.624 LINK nvme_fuzz 00:02:42.624 CXX test/cpp_headers/rpc.o 00:02:42.624 CXX test/cpp_headers/scheduler.o 00:02:42.624 CXX test/cpp_headers/scsi.o 00:02:42.624 CXX test/cpp_headers/scsi_spec.o 00:02:42.624 CXX test/cpp_headers/sock.o 00:02:42.624 CXX test/cpp_headers/stdinc.o 00:02:42.624 CXX test/cpp_headers/string.o 00:02:42.624 CC 
examples/vmd/led/led.o 00:02:42.624 CXX test/cpp_headers/thread.o 00:02:42.624 CXX test/cpp_headers/trace.o 00:02:42.624 CXX test/cpp_headers/trace_parser.o 00:02:42.624 LINK spdk_bdev 00:02:42.624 CXX test/cpp_headers/tree.o 00:02:42.624 LINK reactor 00:02:42.624 CXX test/cpp_headers/ublk.o 00:02:42.624 LINK reactor_perf 00:02:42.624 LINK event_perf 00:02:42.884 CXX test/cpp_headers/util.o 00:02:42.884 CXX test/cpp_headers/uuid.o 00:02:42.884 CXX test/cpp_headers/version.o 00:02:42.884 CXX test/cpp_headers/vfio_user_pci.o 00:02:42.884 CXX test/cpp_headers/vfio_user_spec.o 00:02:42.884 CXX test/cpp_headers/vhost.o 00:02:42.884 LINK app_repeat 00:02:42.884 CXX test/cpp_headers/vmd.o 00:02:42.884 CXX test/cpp_headers/xor.o 00:02:42.884 CXX test/cpp_headers/zipf.o 00:02:42.884 LINK lsvmd 00:02:42.884 LINK mem_callbacks 00:02:42.884 CC app/vhost/vhost.o 00:02:42.884 LINK spdk_nvme 00:02:42.884 LINK scheduler 00:02:42.884 LINK led 00:02:43.146 LINK thread 00:02:43.146 LINK hello_sock 00:02:43.146 LINK vhost_fuzz 00:02:43.146 CC test/nvme/err_injection/err_injection.o 00:02:43.146 CC test/nvme/aer/aer.o 00:02:43.146 CC test/nvme/reset/reset.o 00:02:43.146 CC test/nvme/e2edp/nvme_dp.o 00:02:43.146 CC test/nvme/reserve/reserve.o 00:02:43.146 CC test/nvme/overhead/overhead.o 00:02:43.146 CC test/nvme/sgl/sgl.o 00:02:43.146 CC test/nvme/startup/startup.o 00:02:43.146 CC test/accel/dif/dif.o 00:02:43.146 CC test/nvme/simple_copy/simple_copy.o 00:02:43.146 CC test/nvme/connect_stress/connect_stress.o 00:02:43.146 CC test/blobfs/mkfs/mkfs.o 00:02:43.146 CC test/nvme/boot_partition/boot_partition.o 00:02:43.146 CC test/nvme/compliance/nvme_compliance.o 00:02:43.146 CC test/nvme/fused_ordering/fused_ordering.o 00:02:43.146 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:43.146 CC test/nvme/cuse/cuse.o 00:02:43.146 CC test/nvme/fdp/fdp.o 00:02:43.146 CC test/lvol/esnap/esnap.o 00:02:43.146 LINK vhost 00:02:43.405 LINK idxd_perf 00:02:43.405 LINK spdk_nvme_perf 00:02:43.405 LINK spdk_nvme_identify 00:02:43.405 LINK boot_partition 00:02:43.405 LINK mkfs 00:02:43.405 LINK fused_ordering 00:02:43.405 CC examples/nvme/hello_world/hello_world.o 00:02:43.405 CC examples/nvme/reconnect/reconnect.o 00:02:43.405 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:43.405 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:43.405 CC examples/nvme/arbitration/arbitration.o 00:02:43.405 CC examples/nvme/abort/abort.o 00:02:43.405 CC examples/nvme/hotplug/hotplug.o 00:02:43.405 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:43.405 LINK spdk_top 00:02:43.405 LINK startup 00:02:43.405 LINK connect_stress 00:02:43.405 LINK doorbell_aers 00:02:43.405 LINK err_injection 00:02:43.405 LINK nvme_dp 00:02:43.663 LINK sgl 00:02:43.663 LINK reserve 00:02:43.663 LINK aer 00:02:43.663 LINK overhead 00:02:43.663 CC examples/accel/perf/accel_perf.o 00:02:43.663 LINK reset 00:02:43.663 LINK simple_copy 00:02:43.663 LINK nvme_compliance 00:02:43.663 CC examples/blob/cli/blobcli.o 00:02:43.663 CC examples/blob/hello_world/hello_blob.o 00:02:43.663 LINK pmr_persistence 00:02:43.663 LINK fdp 00:02:43.921 LINK cmb_copy 00:02:43.921 LINK dif 00:02:43.921 LINK hello_world 00:02:43.921 LINK memory_ut 00:02:43.921 LINK hotplug 00:02:43.921 LINK arbitration 00:02:43.921 LINK hello_blob 00:02:44.179 LINK reconnect 00:02:44.179 LINK abort 00:02:44.179 LINK nvme_manage 00:02:44.179 CC test/bdev/bdevio/bdevio.o 00:02:44.179 LINK accel_perf 00:02:44.437 LINK blobcli 00:02:44.694 CC examples/bdev/hello_world/hello_bdev.o 00:02:44.694 CC 
examples/bdev/bdevperf/bdevperf.o 00:02:44.694 LINK bdevio 00:02:44.952 LINK hello_bdev 00:02:44.952 LINK iscsi_fuzz 00:02:44.952 LINK cuse 00:02:45.517 LINK bdevperf 00:02:46.083 CC examples/nvmf/nvmf/nvmf.o 00:02:46.341 LINK nvmf 00:02:49.630 LINK esnap 00:02:50.198 00:02:50.198 real 1m15.812s 00:02:50.198 user 11m17.460s 00:02:50.198 sys 2m24.999s 00:02:50.198 21:46:09 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:50.198 21:46:09 make -- common/autotest_common.sh@10 -- $ set +x 00:02:50.198 ************************************ 00:02:50.198 END TEST make 00:02:50.198 ************************************ 00:02:50.198 21:46:09 -- common/autotest_common.sh@1142 -- $ return 0 00:02:50.198 21:46:09 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:50.198 21:46:09 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:50.198 21:46:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:50.198 21:46:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.198 21:46:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:50.198 21:46:09 -- pm/common@44 -- $ pid=3840588 00:02:50.198 21:46:09 -- pm/common@50 -- $ kill -TERM 3840588 00:02:50.198 21:46:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.198 21:46:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:50.198 21:46:09 -- pm/common@44 -- $ pid=3840590 00:02:50.198 21:46:09 -- pm/common@50 -- $ kill -TERM 3840590 00:02:50.198 21:46:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.198 21:46:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:50.198 21:46:09 -- pm/common@44 -- $ pid=3840592 00:02:50.198 21:46:09 -- pm/common@50 -- $ kill -TERM 3840592 00:02:50.198 21:46:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.198 21:46:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:50.198 21:46:09 -- pm/common@44 -- $ pid=3840621 00:02:50.198 21:46:09 -- pm/common@50 -- $ sudo -E kill -TERM 3840621 00:02:50.198 21:46:09 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:50.198 21:46:09 -- nvmf/common.sh@7 -- # uname -s 00:02:50.198 21:46:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:50.199 21:46:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:50.199 21:46:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:50.199 21:46:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:50.199 21:46:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:50.199 21:46:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:50.199 21:46:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:50.199 21:46:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:50.199 21:46:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:50.199 21:46:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:50.199 21:46:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:02:50.199 21:46:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:02:50.199 21:46:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:50.199 21:46:09 -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:50.199 21:46:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:50.199 21:46:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:50.199 21:46:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:50.199 21:46:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:50.199 21:46:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:50.199 21:46:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:50.199 21:46:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.199 21:46:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.199 21:46:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.199 21:46:09 -- paths/export.sh@5 -- # export PATH 00:02:50.199 21:46:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:50.199 21:46:09 -- nvmf/common.sh@47 -- # : 0 00:02:50.199 21:46:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:50.199 21:46:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:50.199 21:46:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:50.199 21:46:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:50.199 21:46:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:50.199 21:46:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:50.199 21:46:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:50.199 21:46:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:50.199 21:46:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:50.199 21:46:09 -- spdk/autotest.sh@32 -- # uname -s 00:02:50.199 21:46:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:50.199 21:46:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:50.199 21:46:09 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:50.199 21:46:09 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:50.199 21:46:09 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:50.199 21:46:09 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:50.199 21:46:09 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:50.199 21:46:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:50.199 21:46:09 -- spdk/autotest.sh@48 -- # udevadm_pid=3899369 00:02:50.199 21:46:09 -- 
spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:50.199 21:46:09 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:50.199 21:46:09 -- pm/common@17 -- # local monitor 00:02:50.199 21:46:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.199 21:46:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.199 21:46:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.199 21:46:09 -- pm/common@21 -- # date +%s 00:02:50.199 21:46:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:50.199 21:46:09 -- pm/common@21 -- # date +%s 00:02:50.199 21:46:09 -- pm/common@25 -- # sleep 1 00:02:50.199 21:46:09 -- pm/common@21 -- # date +%s 00:02:50.199 21:46:09 -- pm/common@21 -- # date +%s 00:02:50.199 21:46:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720899969 00:02:50.199 21:46:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720899969 00:02:50.199 21:46:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720899969 00:02:50.199 21:46:09 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720899969 00:02:50.199 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720899969_collect-vmstat.pm.log 00:02:50.199 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720899969_collect-cpu-load.pm.log 00:02:50.199 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720899969_collect-cpu-temp.pm.log 00:02:50.199 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720899969_collect-bmc-pm.bmc.pm.log 00:02:51.136 21:46:10 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:51.136 21:46:10 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:51.136 21:46:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:51.136 21:46:10 -- common/autotest_common.sh@10 -- # set +x 00:02:51.137 21:46:10 -- spdk/autotest.sh@59 -- # create_test_list 00:02:51.137 21:46:10 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:51.137 21:46:10 -- common/autotest_common.sh@10 -- # set +x 00:02:51.137 21:46:10 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:51.137 21:46:10 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:51.137 21:46:10 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:51.137 21:46:10 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:51.137 21:46:10 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:51.137 21:46:10 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:51.137 21:46:10 -- common/autotest_common.sh@1455 -- # uname 
00:02:51.137 21:46:10 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:51.137 21:46:10 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:51.137 21:46:10 -- common/autotest_common.sh@1475 -- # uname 00:02:51.137 21:46:10 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:51.137 21:46:10 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:51.137 21:46:10 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:51.137 21:46:10 -- spdk/autotest.sh@72 -- # hash lcov 00:02:51.137 21:46:10 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:51.137 21:46:10 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:51.137 --rc lcov_branch_coverage=1 00:02:51.137 --rc lcov_function_coverage=1 00:02:51.137 --rc genhtml_branch_coverage=1 00:02:51.137 --rc genhtml_function_coverage=1 00:02:51.137 --rc genhtml_legend=1 00:02:51.137 --rc geninfo_all_blocks=1 00:02:51.137 ' 00:02:51.137 21:46:10 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:51.137 --rc lcov_branch_coverage=1 00:02:51.137 --rc lcov_function_coverage=1 00:02:51.137 --rc genhtml_branch_coverage=1 00:02:51.137 --rc genhtml_function_coverage=1 00:02:51.137 --rc genhtml_legend=1 00:02:51.137 --rc geninfo_all_blocks=1 00:02:51.137 ' 00:02:51.137 21:46:10 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:51.137 --rc lcov_branch_coverage=1 00:02:51.137 --rc lcov_function_coverage=1 00:02:51.137 --rc genhtml_branch_coverage=1 00:02:51.137 --rc genhtml_function_coverage=1 00:02:51.137 --rc genhtml_legend=1 00:02:51.137 --rc geninfo_all_blocks=1 00:02:51.137 --no-external' 00:02:51.137 21:46:10 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:51.137 --rc lcov_branch_coverage=1 00:02:51.137 --rc lcov_function_coverage=1 00:02:51.137 --rc genhtml_branch_coverage=1 00:02:51.137 --rc genhtml_function_coverage=1 00:02:51.137 --rc genhtml_legend=1 00:02:51.137 --rc geninfo_all_blocks=1 00:02:51.137 --no-external' 00:02:51.137 21:46:10 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:51.395 lcov: LCOV version 1.14 00:02:51.395 21:46:10 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:56.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:56.719 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:56.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:56.719 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:56.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:56.719 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:56.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:56.719 geninfo: WARNING: GCOV did not produce any data 
for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:56.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:56.719 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:56.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:56.719 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:56.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:56.719 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:56.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:56.719 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:56.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:56.719 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:56.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:56.719 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:56.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:56.719 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:56.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:56.719 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:56.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:56.719 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:56.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:56.719 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:56.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:56.719 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:56.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:56.719 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:56.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:56.719 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:56.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:56.719 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:56.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:56.719 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:56.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:56.719 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:56.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:56.719 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:56.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:56.719 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:56.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:56.719 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:56.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:56.719 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:56.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:56.719 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:56.720 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:56.720 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:56.720 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 
00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:56.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:56.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:18.645 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:18.645 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:23.912 21:46:42 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:23.912 21:46:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:23.912 21:46:42 -- common/autotest_common.sh@10 -- # set +x 00:03:23.912 21:46:42 -- spdk/autotest.sh@91 -- # rm -f 00:03:23.912 21:46:42 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:24.478 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:24.478 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:24.478 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:24.478 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:24.478 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:24.478 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:24.478 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:24.478 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:24.478 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:24.478 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:24.736 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:24.736 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:24.736 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:24.736 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:24.736 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:24.736 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:24.736 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:24.736 21:46:44 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:24.736 21:46:44 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:24.736 21:46:44 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:24.736 21:46:44 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:24.736 21:46:44 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:24.736 21:46:44 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:24.736 21:46:44 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:24.736 21:46:44 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:24.736 21:46:44 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:24.736 21:46:44 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:24.737 21:46:44 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:24.737 21:46:44 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:24.737 21:46:44 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:24.737 21:46:44 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:24.737 21:46:44 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:24.737 No valid GPT data, bailing 00:03:24.737 21:46:44 -- 
scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:24.996 21:46:44 -- scripts/common.sh@391 -- # pt=
00:03:24.996 21:46:44 -- scripts/common.sh@392 -- # return 1
00:03:24.996 21:46:44 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:24.996 1+0 records in
00:03:24.996 1+0 records out
00:03:24.996 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00162453 s, 645 MB/s
00:03:24.996 21:46:44 -- spdk/autotest.sh@118 -- # sync
00:03:24.996 21:46:44 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:24.996 21:46:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:24.996 21:46:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:26.898 21:46:46 -- spdk/autotest.sh@124 -- # uname -s
00:03:26.898 21:46:46 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:03:26.898 21:46:46 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:26.898 21:46:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:26.898 21:46:46 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:26.898 21:46:46 -- common/autotest_common.sh@10 -- # set +x
00:03:26.898 ************************************
00:03:26.898 START TEST setup.sh
00:03:26.898 ************************************
00:03:26.898 21:46:46 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:26.898 * Looking for test storage...
00:03:26.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:26.898 21:46:46 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:03:26.898 21:46:46 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:03:26.898 21:46:46 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:26.898 21:46:46 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:26.898 21:46:46 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:26.898 21:46:46 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:26.898 ************************************
00:03:26.898 START TEST acl
00:03:26.898 ************************************
00:03:26.898 21:46:46 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:26.898 * Looking for test storage...
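run_test, traced at common/autotest_common.sh@1099 onward, is the harness that prints the START TEST banner, runs the suite, and later prints the matching END TEST banner. A loose sketch of that wrapper under those assumptions (the argument check mirrors the '[' 2 -le 1 ']' test above; the real helper also records per-test timing, which is elided here):

  run_test() {
      # Need at least a test name and a command to run.
      [ "$#" -le 1 ] && return 1
      local test_name=$1
      shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
      return $rc
  }

Nested suites such as setup.sh.acl below go through the same wrapper, with the dotted prefix showing up in each xtrace tag.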
00:03:26.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:26.898 21:46:46 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:26.898 21:46:46 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:26.898 21:46:46 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:26.898 21:46:46 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:26.898 21:46:46 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:26.898 21:46:46 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:26.898 21:46:46 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:26.898 21:46:46 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:26.898 21:46:46 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:26.898 21:46:46 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:26.898 21:46:46 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:26.898 21:46:46 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:26.898 21:46:46 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:26.898 21:46:46 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:26.898 21:46:46 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:26.898 21:46:46 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:28.274 21:46:47 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:28.274 21:46:47 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:28.274 21:46:47 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.274 21:46:47 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:28.274 21:46:47 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.274 21:46:47 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:29.650 Hugepages 00:03:29.650 node hugesize free / total 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.650 00:03:29.650 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:29.650 21:46:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.651 21:46:48 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:29.651 21:46:48 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:29.651 21:46:48 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:29.651 21:46:48 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.651 21:46:48 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:29.651 ************************************ 00:03:29.651 START TEST denied 00:03:29.651 ************************************ 00:03:29.651 21:46:48 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:29.651 21:46:48 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:03:29.651 21:46:48 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:29.651 21:46:48 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:03:29.651 21:46:48 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.651 21:46:48 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:31.056 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:03:31.056 21:46:50 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:03:31.056 21:46:50 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:31.056 21:46:50 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:31.056 21:46:50 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:03:31.056 21:46:50 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:03:31.056 21:46:50 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:31.056 21:46:50 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:31.056 21:46:50 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:31.056 21:46:50 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:31.056 21:46:50 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:33.589 00:03:33.589 real 0m3.917s 00:03:33.589 user 0m1.157s 00:03:33.589 sys 0m1.836s 00:03:33.589 21:46:52 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:33.589 21:46:52 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:33.589 ************************************ 00:03:33.589 END TEST denied 00:03:33.589 ************************************ 00:03:33.589 21:46:52 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:33.589 21:46:52 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:33.589 21:46:52 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.589 21:46:52 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.589 21:46:52 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:33.589 ************************************ 00:03:33.589 START TEST allowed 00:03:33.589 ************************************ 00:03:33.589 21:46:52 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:33.589 21:46:52 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:33.589 21:46:52 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:33.589 21:46:52 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:33.589 21:46:52 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.589 21:46:52 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:36.121 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:36.121 21:46:55 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:36.121 21:46:55 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:36.121 21:46:55 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:36.121 21:46:55 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:36.121 21:46:55 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:37.497 00:03:37.497 real 0m3.790s 00:03:37.497 user 0m0.950s 00:03:37.497 sys 0m1.669s 00:03:37.497 21:46:56 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:37.497 21:46:56 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:37.497 ************************************ 00:03:37.497 END TEST allowed 00:03:37.497 ************************************ 00:03:37.497 21:46:56 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:37.497 00:03:37.497 real 0m10.533s 00:03:37.497 user 0m3.201s 00:03:37.497 sys 0m5.305s 00:03:37.497 21:46:56 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:37.497 21:46:56 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:37.497 ************************************ 00:03:37.497 END TEST acl 00:03:37.497 ************************************ 00:03:37.497 21:46:56 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:37.497 21:46:56 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:37.497 21:46:56 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:37.497 21:46:56 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.497 21:46:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:37.497 ************************************ 00:03:37.497 START TEST hugepages 00:03:37.497 ************************************ 00:03:37.497 21:46:56 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:37.497 * Looking for test storage... 00:03:37.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:37.497 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:37.497 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:37.497 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:37.497 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:37.497 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:37.497 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:37.497 21:46:56 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:37.497 21:46:56 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:37.497 21:46:56 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:37.497 21:46:56 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43444484 kB' 'MemAvailable: 46950328 kB' 'Buffers: 2704 kB' 'Cached: 10514324 kB' 'SwapCached: 0 kB' 'Active: 7522152 kB' 'Inactive: 3506552 kB' 'Active(anon): 7127800 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514972 kB' 'Mapped: 213700 kB' 'Shmem: 6616124 kB' 'KReclaimable: 197128 kB' 'Slab: 567908 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 370780 kB' 'KernelStack: 12784 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 8246492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB' 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:37.498 21:46:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:37.498 21:46:56 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
[... setup/common.sh@31-32 loop elided: each remaining /proc/meminfo field (AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp) is read and compared against Hugepagesize; none matches, so the loop continues ...]
00:03:37.499 21:46:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:37.499 21:46:56 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:37.499 21:46:56 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:37.499 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:37.499 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:37.499 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:37.499 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:37.499 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:37.499 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:37.499 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:37.499 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:37.499 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:03:37.499 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:37.499 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:37.499 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:37.500 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:37.500 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:37.500 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:37.500 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:37.500 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
[... clear_hp loop elided: for node0 and node1, setup/hugepages.sh@39-41 echoes 0 into each /sys/devices/system/node/node$node/hugepages/hugepages-*/nr_hugepages ...]
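The loop traced above is setup/common.sh resolving one field of /proc/meminfo: with IFS=': ' each line splits into a key and a value, every non-matching key falls through to continue, and the first match echoes its value (Hugepagesize -> 2048 kB here) and returns. A minimal standalone sketch of the same pattern; get_meminfo_value is a hypothetical name standing in for the real helper:

    #!/usr/bin/env bash
    # Look up one key in /proc/meminfo the same way the traced loop does:
    # split on ': ', skip non-matching keys, print the first match's value.
    get_meminfo_value() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done </proc/meminfo
        return 1
    }

    get_meminfo_value Hugepagesize   # prints 2048 (kB) on this test node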
00:03:37.500 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:37.500 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:37.500 21:46:56 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:37.500 21:46:56 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:37.500 21:46:56 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:37.500 21:46:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:37.500 ************************************
00:03:37.500 START TEST default_setup
00:03:37.500 ************************************
00:03:37.500 21:46:56 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:03:37.500 21:46:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:37.500 21:46:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:37.500 21:46:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:37.500 21:46:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:37.500 21:46:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:37.500 21:46:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:37.500 21:46:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:37.500 21:46:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:37.500 21:46:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:37.500 21:46:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:37.500 21:46:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:37.500 21:46:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:37.500 21:46:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:37.500 21:46:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:37.500 21:46:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:37.500 21:46:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:37.500 21:46:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:37.500 21:46:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:37.500 21:46:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:37.500 21:46:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:37.500 21:46:56 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:37.500 21:46:56 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
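get_test_nr_hugepages turns the requested pool size into a page count: 2097152 kB (2 GiB) divided by the 2048 kB default hugepage size yields nr_hugepages=1024, and because a single node id ('0') was passed, the whole pool is assigned to node 0 while node 1 gets nothing. A sketch of that arithmetic, with illustrative variable names rather than SPDK's:

    size_kb=2097152                  # requested pool: 2097152 kB == 2 GiB
    hugepage_kb=2048                 # Hugepagesize from /proc/meminfo
    nr_hugepages=$((size_kb / hugepage_kb))
    echo "$nr_hugepages"             # -> 1024 pages of 2 MiB each
    nodes_test[0]=$nr_hugepages      # the single user-supplied node takes it all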
00:03:38.875 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:38.875 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:38.875 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:38.875 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:38.875 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:38.875 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:38.875 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:38.875 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:38.875 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:38.875 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:38.875 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:38.875 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:38.875 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:38.875 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:38.875 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:38.875 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:39.816 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
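These lines are scripts/setup.sh detaching the ioatdma channels and the NVMe SSD from their kernel drivers and handing them to vfio-pci so user-space SPDK can drive them. The standard sysfs mechanics look roughly like this sketch (the BDF is one of the devices above; SPDK's setup.sh adds more bookkeeping around the same kernel interfaces):

    bdf=0000:00:04.7
    dev=/sys/bus/pci/devices/$bdf
    echo vfio-pci > "$dev/driver_override"    # pin the device to vfio-pci
    if [[ -e $dev/driver ]]; then
        echo "$bdf" > "$dev/driver/unbind"    # detach ioatdma (or nvme)
    fi
    echo "$bdf" > /sys/bus/pci/drivers_probe  # re-probe; vfio-pci claims it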
00:03:39.816 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:39.816 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:39.816 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:39.816 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:39.816 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:39.816 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:39.816 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:39.816 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:39.817 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:39.817 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:39.817 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:39.817 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:39.817 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:39.817 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.817 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:39.817 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:39.817 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.817 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.817 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:39.817 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:39.817 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45448632 kB' 'MemAvailable: 48954476 kB' 'Buffers: 2704 kB' 'Cached: 10514416 kB' 'SwapCached: 0 kB' 'Active: 7540116 kB' 'Inactive: 3506552 kB' 'Active(anon): 7145764 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532804 kB' 'Mapped: 213840 kB' 'Shmem: 6616216 kB' 'KReclaimable: 197128 kB' 'Slab: 567376 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 370248 kB' 'KernelStack: 12752 kB' 'PageTables: 8244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8267124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB'
[... setup/common.sh@31-32 scan elided: each snapshot field from MemTotal through HardwareCorrupted is compared against AnonHugePages; no match, so the loop continues ...]
00:03:39.818 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:39.818 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:39.818 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:39.818 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
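The snapshot printed above already shows the pool default_setup asked for: HugePages_Total: 1024, HugePages_Free: 1024, and Hugetlb: 2097152 kB, which is self-consistent since 1024 pages x 2048 kB = 2097152 kB. A quick way to re-derive that on a live system (a sketch, not part of the test):

    # Pull the hugepage counters and check Total * Hugepagesize == Hugetlb.
    read -r total free size_kb < <(
        awk '/^HugePages_Total/ {t=$2} /^HugePages_Free/ {f=$2}
             /^Hugepagesize/    {h=$2} END {print t, f, h}' /proc/meminfo)
    echo "$((total * size_kb)) kB total, $free of $total pages free"
    # on this node: 2097152 kB total, 1024 of 1024 pages free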
00:03:39.818 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:39.818 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:39.818 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:39.818 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:39.818 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:39.818 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.818 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:39.818 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:39.818 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.818 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.818 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:39.818 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:39.818 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45451244 kB' 'MemAvailable: 48957088 kB' 'Buffers: 2704 kB' 'Cached: 10514416 kB' 'SwapCached: 0 kB' 'Active: 7540292 kB' 'Inactive: 3506552 kB' 'Active(anon): 7145940 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533000 kB' 'Mapped: 213796 kB' 'Shmem: 6616216 kB' 'KReclaimable: 197128 kB' 'Slab: 567364 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 370236 kB' 'KernelStack: 12816 kB' 'PageTables: 8380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8267144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB'
[... setup/common.sh@31-32 scan elided: each snapshot field from MemTotal through HugePages_Rsvd is compared against HugePages_Surp; no match, so the loop continues ...]
00:03:39.820 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:39.820 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:39.820 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:39.820 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
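verify_nr_hugepages is collecting the counters it will reconcile: anon (transparent hugepages in use) and surp (surplus pages) both came back 0, and HugePages_Rsvd is fetched next. With those in hand, the pages actually usable are roughly Total minus Rsvd. A sketch of that reduction, reusing the hypothetical get_meminfo_value helper sketched earlier (the test's exact assertions live in setup/hugepages.sh and may differ):

    anon=$(get_meminfo_value AnonHugePages)    # kB of THP in use: 0 here
    surp=$(get_meminfo_value HugePages_Surp)   # surplus pages: 0 here
    resv=$(get_meminfo_value HugePages_Rsvd)   # reserved-but-unfaulted pages
    total=$(get_meminfo_value HugePages_Total)
    echo "usable=$((total - resv)) surp=$surp anon=$anon"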
00:03:39.820 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:39.820 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:39.820 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:39.820 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:39.820 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:39.820 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.820 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:39.820 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:39.820 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.820 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.820 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:39.820 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:39.820 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45452172 kB' 'MemAvailable: 48958016 kB' 'Buffers: 2704 kB' 'Cached: 10514432 kB' 'SwapCached: 0 kB' 'Active: 7539776 kB' 'Inactive: 3506552 kB' 'Active(anon): 7145424 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532452 kB' 'Mapped: 213736 kB' 'Shmem: 6616232 kB' 'KReclaimable: 197128 kB' 'Slab: 567380 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 370252 kB' 'KernelStack: 12784 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8267164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB'
00:03:39.822 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:39.822 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:39.822 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:39.822 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:39.822 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:39.822 nr_hugepages=1024
00:03:39.822 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:39.822 resv_hugepages=0
00:03:39.822 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:39.822 surplus_hugepages=0
00:03:39.822 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:39.822 anon_hugepages=0
00:03:39.822 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:39.822 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
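With surp and resv in hand, hugepages.sh runs its consistency checks before trusting the pool. A minimal sketch of the bookkeeping traced at setup/hugepages.sh@99-110, assuming the get_meminfo sketch above (the exact source lines may differ):

    surp=$(get_meminfo HugePages_Surp)   # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run
    echo "nr_hugepages=$nr_hugepages"    # 1024, the pool requested by default_setup
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    # The pool is consistent when the kernel's total equals the requested
    # count plus surplus and reserved pages: 1024 == 1024 + 0 + 0 here.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))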
00:03:39.822 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:39.822 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:39.822 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:39.822 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:39.822 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:39.822 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:39.822 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:39.822 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:39.822 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:39.822 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:39.822 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:39.822 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:39.822 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45453616 kB' 'MemAvailable: 48959460 kB' 'Buffers: 2704 kB' 'Cached: 10514456 kB' 'SwapCached: 0 kB' 'Active: 7540292 kB' 'Inactive: 3506552 kB' 'Active(anon): 7145940 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532944 kB' 'Mapped: 213736 kB' 'Shmem: 6616256 kB' 'KReclaimable: 197128 kB' 'Slab: 567372 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 370244 kB' 'KernelStack: 12800 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8267184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB'
00:03:39.824 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:39.824 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:39.824 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:39.824 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:39.824 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:39.824 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:39.824 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:39.824 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:39.824 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:39.824 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:39.824 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:39.824 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:39.824 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:40.084 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:40.084 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:40.084 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:40.084 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:40.084 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:40.084 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:40.084 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.084 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:40.084 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:40.084 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.084 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.084 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:40.084 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:40.084 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27120024 kB' 'MemUsed: 5709860 kB' 'SwapCached: 0 kB' 'Active: 2468540 kB' 'Inactive: 108696 kB' 'Active(anon): 2357652 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2332276 kB' 'Mapped: 89840 kB' 'AnonPages: 248176 kB' 'Shmem: 2112692 kB' 'KernelStack: 7224 kB' 'PageTables: 4648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91976 kB' 'Slab: 304732 kB' 'SReclaimable: 91976 kB' 'SUnreclaim: 212756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
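This dump is the node-local view: with node 0 requested, get_meminfo switched mem_f to /sys/devices/system/node/node0/meminfo and stripped the "Node 0" prefix the kernel puts on every line there. The same read as a standalone snippet (standard sysfs paths; the prefix strip mirrors the mem=(...) expansion in the trace):

    # Fetch HugePages_Free for NUMA node 0 the way the trace does.
    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == HugePages_Free ]] && echo "node0 HugePages_Free: $val"
    done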
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.085 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:40.085 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:40.085 21:46:59 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:40.085 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.085 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.085 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.085 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.085 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:40.085 node0=1024 expecting 1024 00:03:40.085 21:46:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:40.085 00:03:40.085 real 0m2.407s 00:03:40.085 user 0m0.644s 00:03:40.085 sys 0m0.852s 00:03:40.085 21:46:59 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.085 21:46:59 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:40.085 ************************************ 00:03:40.085 END TEST default_setup 00:03:40.085 ************************************ 00:03:40.085 21:46:59 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:40.085 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:40.085 21:46:59 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.085 21:46:59 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.085 21:46:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:40.085 ************************************ 00:03:40.085 START TEST per_node_1G_alloc 00:03:40.085 ************************************ 00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:40.085 21:46:59 
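---- annotation ------------------------------------------------------------
The wall of 'continue' lines condensed above is one call to get_meminfo in
setup/common.sh: it scans /proc/meminfo line by line, and under 'set -x'
every non-matching key prints as a test plus 'continue'. A minimal
re-creation of that pattern, inferred from the trace rather than copied from
SPDK's source:

    #!/usr/bin/env bash
    shopt -s extglob

    # get_meminfo KEY [NODE] -> prints the value column for KEY
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo mem
        # with a node id, prefer the per-node sysfs view when it exists
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node meminfo lines carry a "Node N " prefix; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # <- each miss is one trace line
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp   # prints 0 on this machine, per the log
----------------------------------------------------------------------------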
00:03:40.085 21:46:59 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:40.085 21:46:59 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:40.085 ************************************
00:03:40.085 END TEST default_setup
00:03:40.085 ************************************
00:03:40.085 21:46:59 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:40.085 21:46:59 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:40.085 21:46:59 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:40.085 21:46:59 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:40.085 21:46:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:40.085 ************************************
00:03:40.085 START TEST per_node_1G_alloc
00:03:40.085 ************************************
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:40.085 21:46:59 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
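---- annotation ------------------------------------------------------------
The sizing trace above fits simple arithmetic, assuming get_test_nr_hugepages
takes its size argument in kB (the interpretation is mine, but it reproduces
every traced value): 1 GiB split across NUMA nodes 0 and 1 at the machine's
2048 kB hugepage size gives 512 pages per node.

    #!/usr/bin/env bash
    size_kb=1048576                                # get_test_nr_hugepages 1048576 0 1
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 here
    per_node=$(( size_kb / hp_kb ))                # 512 -> NRHUGE=512, nodes_test[0|1]=512
    total=0
    for node in 0 1; do                            # HUGENODE=0,1
        echo "node$node: $per_node pages"
        (( total += per_node ))
    done
    echo "total: $total pages"                     # 1024 -> nr_hugepages=1024
----------------------------------------------------------------------------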
00:03:41.467 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:41.467 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:41.467 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:41.467 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:41.467 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:41.467 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:41.467 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:41.467 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:41.467 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:41.467 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:41.467 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:41.467 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:41.467 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:41.467 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:41.467 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:41.467 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:41.467 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:41.467 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:41.467 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:41.467 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:41.467 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:41.467 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:41.467 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:41.467 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:41.467 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:41.467 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:41.467 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:41.467 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:41.467 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:41.467 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:41.467 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:41.467 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:41.467 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:41.467 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:41.467 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:41.467 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:41.467 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:41.467 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:41.467 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45440000 kB' 'MemAvailable: 48945844 kB' 'Buffers: 2704 kB' 'Cached: 10514532 kB' 'SwapCached: 0 kB' 'Active: 7540632 kB' 'Inactive: 3506552 kB' 'Active(anon): 7146280 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533096 kB' 'Mapped: 213868 kB' 'Shmem: 6616332 kB' 'KReclaimable: 197128 kB' 'Slab: 567420 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 370292 kB' 'KernelStack: 12752 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8267368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB'
00:03:41.468 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [xtrace condensed: one '[[ <key> == AnonHugePages ]]' test plus 'continue' per /proc/meminfo key from MemTotal through HardwareCorrupted]
00:03:41.469 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:41.469 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:41.469 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:41.469 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
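---- annotation ------------------------------------------------------------
Two details of the verify_nr_hugepages trace above are easy to miss. The @96
test checks the transparent-hugepage setting ('always [madvise] never');
since the bracketed selection is not [never], AnonHugePages is worth reading
at all. And the @29 expansion normalizes per-node meminfo lines, which carry
a 'Node N ' prefix, so the same matching loop works on both the global and
the per-node file. A small illustration (the sample line is mine):

    #!/usr/bin/env bash
    shopt -s extglob
    line='Node 0 HugePages_Total:     512'   # /sys/devices/system/node/node0/meminfo style
    echo "${line#Node +([0-9]) }"            # -> 'HugePages_Total:     512'
    # In the trace node= was empty, so [[ -e /sys/devices/system/node/node/meminfo ]]
    # was false and the lookup fell through to /proc/meminfo.
----------------------------------------------------------------------------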
00:03:41.469 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:41.469 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:41.469 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:41.469 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:41.469 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:41.469 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:41.469 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:41.469 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:41.469 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:41.469 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:41.469 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:41.469 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:41.469 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45446904 kB' 'MemAvailable: 48952748 kB' 'Buffers: 2704 kB' 'Cached: 10514536 kB' 'SwapCached: 0 kB' 'Active: 7540208 kB' 'Inactive: 3506552 kB' 'Active(anon): 7145856 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532648 kB' 'Mapped: 213752 kB' 'Shmem: 6616336 kB' 'KReclaimable: 197128 kB' 'Slab: 567408 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 370280 kB' 'KernelStack: 12784 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8267388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB'
00:03:41.469 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [xtrace condensed: one '[[ <key> == HugePages_Surp ]]' test plus 'continue' per /proc/meminfo key from MemTotal through HugePages_Rsvd]
00:03:41.471 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:41.471 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:41.471 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
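---- annotation ------------------------------------------------------------
For interactive spot-checks, the same counters this test walks to can be
pulled with awk one-liners (equivalents for the reader, not what the script
runs; the pure-bash loop above is what produces the long trace):

    awk '/^AnonHugePages:/  {print $2}' /proc/meminfo   # kB of THP-backed anon memory
    awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo   # surplus pages beyond the configured pool
    awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo   # pages reserved but not yet faulted in
----------------------------------------------------------------------------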
setup/hugepages.sh@99 -- # surp=0 00:03:41.471 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:41.471 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:41.471 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:41.471 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.471 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.471 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.471 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.471 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.471 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.471 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.471 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.471 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.471 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45447512 kB' 'MemAvailable: 48953356 kB' 'Buffers: 2704 kB' 'Cached: 10514552 kB' 'SwapCached: 0 kB' 'Active: 7540448 kB' 'Inactive: 3506552 kB' 'Active(anon): 7146096 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532860 kB' 'Mapped: 213752 kB' 'Shmem: 6616352 kB' 'KReclaimable: 197128 kB' 'Slab: 567408 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 370280 kB' 'KernelStack: 12816 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8267408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB' 00:03:41.471 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.471 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.471 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.471 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.471 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.471 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.471 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.471 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.471 
[... repeated skip iterations elided: the common.sh@31-32 cycle of IFS=': ' / read -r var val _ / [[ <field> == HugePages_Rsvd ]] / continue repeats for each remaining meminfo field, MemAvailable through FileHugePages ...]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:41.473 nr_hugepages=1024 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:41.473 
resv_hugepages=0 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:41.473 surplus_hugepages=0 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:41.473 anon_hugepages=0 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45454992 kB' 'MemAvailable: 48960836 kB' 'Buffers: 2704 kB' 'Cached: 10514576 kB' 'SwapCached: 0 kB' 'Active: 7540456 kB' 'Inactive: 3506552 kB' 'Active(anon): 7146104 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532896 kB' 'Mapped: 213752 kB' 'Shmem: 6616376 kB' 'KReclaimable: 197128 kB' 'Slab: 567400 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 370272 kB' 'KernelStack: 12832 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8267432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB' 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.473 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.473 
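The arithmetic guards at hugepages.sh@107-110 assert that the kernel's hugepage accounting matches what the test configured: the total page count must equal the requested count plus any surplus and reserved pages. Condensed into a few lines (reusing the hypothetical get_meminfo sketch above; variable names come from the trace):

nr_hugepages=1024                       # requested by the test
surp=$(get_meminfo HugePages_Surp)      # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
total=$(get_meminfo HugePages_Total)    # 1024 in this run

(( total == nr_hugepages + surp + resv )) || exit 1
(( total == nr_hugepages )) || exit 1
echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"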
[... repeated skip iterations elided: the common.sh@31-32 cycle of IFS=': ' / read -r var val _ / [[ <field> == HugePages_Total ]] / continue repeats for each remaining meminfo field, MemFree through ShmemHugePages ...]
read -r var val _ 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:41.475 21:47:00 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28169908 kB' 'MemUsed: 4659976 kB' 'SwapCached: 0 kB' 'Active: 2468796 kB' 'Inactive: 108696 kB' 'Active(anon): 2357908 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2332392 kB' 'Mapped: 89856 kB' 'AnonPages: 248268 kB' 'Shmem: 2112808 kB' 'KernelStack: 7272 kB' 'PageTables: 4552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91976 kB' 'Slab: 304684 kB' 'SReclaimable: 91976 kB' 'SUnreclaim: 212708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.475 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# continue 00:03:41.475
[... repeated skip iterations elided: the common.sh@31-32 cycle of IFS=': ' / read -r var val _ / [[ <field> == HugePages_Surp ]] / continue repeats for each remaining node0 meminfo field, MemUsed through HugePages_Total ...]
00:03:41.476 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.476 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.476 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.476 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:41.476 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.476 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.476 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.476 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:41.476 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:41.476 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:41.476 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:41.477 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:41.477 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:41.477 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.477 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:41.477 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:41.477 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:41.477 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.477 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:41.477 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:41.477 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.477 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.477 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:41.477 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:41.477 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17284328 kB' 'MemUsed: 10427496 kB' 'SwapCached: 0 kB' 'Active: 5072072 kB' 'Inactive: 3397856 kB' 'Active(anon): 4788608 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3397856 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8184912 kB' 'Mapped: 123896 kB' 'AnonPages: 285020 kB' 'Shmem: 4503592 kB' 'KernelStack: 5544 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105152 kB' 'Slab: 262712 kB' 'SReclaimable: 105152 kB' 'SUnreclaim: 157560 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
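For readers following the trace: get_meminfo resolves a single meminfo field, system-wide from /proc/meminfo or per NUMA node from the sysfs copy, stripping the "Node <n> " prefix before matching, which is the mapfile/read sequence traced above. A minimal stand-alone sketch of that lookup, assuming only the standard kernel file layout (get_meminfo_sketch is an illustrative name, not SPDK's actual setup/common.sh helper):

  #!/usr/bin/env bash
  shopt -s extglob
  # Illustrative lookup (assumption: standard /proc and /sys meminfo layout).
  get_meminfo_sketch() {
      local get=$1 node=$2 line var val _
      local mem_f=/proc/meminfo mem
      # Per-node counters live in sysfs and carry a "Node <n> " line prefix.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node <n> " prefix
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      echo 0   # field not found: report 0, as the trace does
  }
  # Mirroring the call traced above: get_meminfo_sketch HugePages_Surp 1  ->  0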
00:03:41.477 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:41.477 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace loop elided: each remaining node1 meminfo field is checked against HugePages_Surp and skipped with "continue"]
00:03:41.478 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:41.478 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:41.478 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:41.478 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:41.478 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:41.478 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:41.478 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:41.478 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:03:41.478 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:41.478 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:41.478 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:41.478 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
00:03:41.478 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:41.478
00:03:41.478 real 0m1.504s
00:03:41.478 user 0m0.615s
00:03:41.478 sys 0m0.852s
00:03:41.478 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:41.478 21:47:00 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:41.478 ************************************
00:03:41.478 END TEST per_node_1G_alloc
00:03:41.478 ************************************
00:03:41.478 21:47:00 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:41.478 21:47:00 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:41.478 21:47:00 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:41.478 21:47:00 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:41.478 21:47:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:41.478 ************************************
00:03:41.478 START TEST even_2G_alloc
00:03:41.478 ************************************
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:41.478 21:47:00 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:42.873 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:42.873 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:42.873 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:42.873 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:42.873 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:42.873 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:42.873 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:42.873 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:42.873 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:42.873 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:42.873 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:42.873 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:42.873 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:42.873 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:42.873 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:42.873 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:42.873 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
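The get_test_nr_hugepages_per_node trace above turns the requested 2097152 kB into 1024 pages of the default 2048 kB size and hands each of the two nodes the integer share of what remains, i.e. 512 apiece. A minimal sketch of that split (split_evenly is an illustrative name, not an SPDK function):

  #!/usr/bin/env bash
  # Illustrative even split of hugepages across NUMA nodes.
  nodes_test=()
  split_evenly() {
      local _nr_hugepages=$1 _no_nodes=$2
      while (( _no_nodes > 0 )); do
          # highest-numbered node first; each takes its share of the remainder
          nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
          (( _nr_hugepages -= nodes_test[_no_nodes - 1] )) || true
          (( _no_nodes-- )) || true
      done
  }
  split_evenly $(( 2097152 / 2048 )) 2   # 1024 pages over 2 nodes
  for node in "${!nodes_test[@]}"; do
      echo "node$node=${nodes_test[node]}"   # node0=512, node1=512
  done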
00:03:42.873 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:42.873 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:42.873 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:42.873 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:42.873 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:42.873 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:42.873 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:42.873 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:42.873 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:42.873 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:42.873 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:42.873 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:42.873 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:42.873 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.873 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.873 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.873 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.873 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.873 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:42.873 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:42.873 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45443476 kB' 'MemAvailable: 48949320 kB' 'Buffers: 2704 kB' 'Cached: 10514672 kB' 'SwapCached: 0 kB' 'Active: 7541948 kB' 'Inactive: 3506552 kB' 'Active(anon): 7147596 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534348 kB' 'Mapped: 214248 kB' 'Shmem: 6616472 kB' 'KReclaimable: 197128 kB' 'Slab: 567332 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 370204 kB' 'KernelStack: 12864 kB' 'PageTables: 8376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8270540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB'
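The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] entry above is verify_nr_hugepages reading the transparent-hugepage mode: anonymous hugepages are only folded into the expected total when THP is not globally disabled. A hedged sketch of that gate, assuming the standard sysfs path (the awk lookup stands in for the get_meminfo call in the trace):

  #!/usr/bin/env bash
  # Illustrative THP gate: count AnonHugePages only when the kernel does not
  # report transparent hugepages as globally disabled ("[never]").
  thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)  # kB
  fi
  echo "anon=$anon"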
[xtrace loop elided: each system meminfo field is checked against AnonHugePages and skipped with "continue" until the AnonHugePages line matches]
00:03:42.875 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:42.875 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:42.875 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:42.875 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:42.875 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:42.875 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:42.875 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:42.875 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:42.875 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:42.875 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.875 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.875 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.875 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.875 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.875 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45445508 kB' 'MemAvailable: 48951352 kB' 'Buffers: 2704 kB' 'Cached: 10514672 kB' 'SwapCached: 0 kB' 'Active: 7545496 kB' 'Inactive: 3506552 kB' 'Active(anon): 7151144 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537980 kB' 'Mapped: 214252 kB' 'Shmem: 6616472 kB' 'KReclaimable: 197128 kB' 'Slab: 567324 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 370196 kB' 'KernelStack: 12880 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8272712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB'
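A quick consistency check on the snapshot just printed: HugePages_Total of 1024 pages at the reported Hugepagesize of 2048 kB accounts exactly for the Hugetlb figure, i.e. the full 2 GiB this test requested:

  # 1024 pages x 2048 kB/page = 2097152 kB, matching 'Hugetlb: 2097152 kB'
  echo "$(( 1024 * 2048 )) kB"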
[xtrace loop elided: each system meminfo field is checked against HugePages_Surp and skipped with "continue"; the trace resumes below]
00:03:42.876 21:47:02 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.876 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.876 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.876 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.876 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.876 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.876 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.876 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.876 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.876 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.876 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.876 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.876 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.876 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.876 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.876 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.876 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.876 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.876 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.876 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.876 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.876 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.876 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.876 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.876 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45443876 kB' 'MemAvailable: 48949720 kB' 'Buffers: 2704 kB' 'Cached: 10514692 kB' 'SwapCached: 0 kB' 'Active: 7546044 kB' 'Inactive: 3506552 kB' 'Active(anon): 7151692 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538488 kB' 'Mapped: 214692 kB' 'Shmem: 6616492 kB' 'KReclaimable: 197128 kB' 'Slab: 567416 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 370288 kB' 'KernelStack: 12848 kB' 'PageTables: 8320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8273800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196036 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB' 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
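The repeated "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" runs above are bash xtrace from the get_meminfo helper in setup/common.sh: it snapshots /proc/meminfo (or a per-node copy under /sys/devices/system/node), strips any "Node N " prefix, then walks the lines until the requested key matches and echoes its value. A minimal self-contained sketch of that pattern, under the assumed name get_meminfo_sketch (illustrative only, not the actual setup/common.sh source):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern visible in the trace above.
    shopt -s extglob  # needed for the +([0-9]) pattern below

    get_meminfo_sketch() {
        local get=$1 node=${2-} var val _ line
        local mem_f=/proc/meminfo mem
        # With an empty $node this probes node/node/meminfo (as the trace
        # shows) and falls back to the system-wide /proc/meminfo.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            # Split e.g. "HugePages_Rsvd:  0" into key and value on ': '.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        echo 0  # key not present
    }

For example, get_meminfo_sketch HugePages_Rsvd would print 0 on the host traced here, while get_meminfo_sketch HugePages_Surp 0 would read node 0's meminfo file instead.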
00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.877 21:47:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.877 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _
00:03:42.878 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:42.879 nr_hugepages=1024
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:42.879 resv_hugepages=0
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:42.879 surplus_hugepages=0
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:42.879 anon_hugepages=0
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
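By this point the trace has established surp=0 and resv=0, echoed nr_hugepages=1024, and hugepages.sh is about to confirm that the kernel's HugePages_Total matches the requested pool before checking the per-node spread. A hedged sketch of that accounting, reusing the assumed get_meminfo_sketch helper from above (the 512-per-node figure follows from this even_2G_alloc run: 1024 pages of 2048 kB are 2 GB, split evenly across 2 NUMA nodes):

    # Global pool must balance: total == requested + surplus + reserved.
    nr_hugepages=1024
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in the run above
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in the run above
    total=$(get_meminfo_sketch HugePages_Total)  # 1024 in the run above
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting is off"

    # Even allocation: each of the two nodes should hold half the pool.
    for node in 0 1; do
        (( $(get_meminfo_sketch HugePages_Total "$node") == nr_hugepages / 2 )) ||
            echo "node $node holds an uneven share"
    done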
00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45440968 kB' 'MemAvailable: 48946812 kB' 'Buffers: 2704 kB' 'Cached: 10514720 kB' 'SwapCached: 0 kB' 'Active: 7543216 kB' 'Inactive: 3506552 kB' 'Active(anon): 7148864 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535644 kB' 'Mapped: 214200 kB' 'Shmem: 6616520 kB' 'KReclaimable: 197128 kB' 'Slab: 567424 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 370296 kB' 'KernelStack: 12832 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8271172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB' 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.879 21:47:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.879 
21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.879 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.880 
21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:42.880 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28149616 kB' 'MemUsed: 4680268 kB' 'SwapCached: 0 kB' 'Active: 2473676 kB' 'Inactive: 108696 kB' 'Active(anon): 2362788 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2332452 kB' 'Mapped: 90568 kB' 'AnonPages: 253104 kB' 'Shmem: 2112868 kB' 'KernelStack: 7272 kB' 'PageTables: 4488
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91976 kB' 'Slab: 304800 kB' 'SReclaimable: 91976 kB' 'SUnreclaim: 212824 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.881 
21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.881 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17287872 kB' 'MemUsed: 10423952 kB' 'SwapCached: 0 kB' 'Active: 5072556 kB' 'Inactive: 3397856 kB' 'Active(anon): 4789092 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3397856 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8185012 kB' 'Mapped: 124048 kB' 'AnonPages: 285548 kB' 'Shmem: 4503692 kB' 'KernelStack: 5576 kB' 'PageTables: 3824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105152 kB' 'Slab: 262624 kB' 'SReclaimable: 105152 kB' 'SUnreclaim: 157472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.882 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.883 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:42.884 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:42.884 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:42.884 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.884 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:42.884 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:42.884 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.884 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.884 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.884 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.884 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:42.884 node0=512 expecting 512 00:03:42.884 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.884 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.884 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.884 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:42.884 node1=512 expecting 512 00:03:42.884 21:47:02 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:42.884 00:03:42.884 real 0m1.389s 00:03:42.884 user 0m0.607s 00:03:42.884 sys 0m0.741s 00:03:42.884 21:47:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.884 21:47:02 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:42.884 ************************************ 00:03:42.884 END TEST even_2G_alloc 00:03:42.884 ************************************ 00:03:42.884 21:47:02 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:42.884 21:47:02 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:42.884 21:47:02 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.884 21:47:02 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.884 21:47:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:42.884 ************************************ 00:03:42.884 START TEST odd_alloc 
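Both elided scans above come from the get_meminfo helper in setup/common.sh. Pieced together from the @17-@33 line labels visible in the trace, its logic is roughly the sketch below; this is an illustrative reconstruction for the reader, not the verbatim SPDK script.

#!/usr/bin/env bash
shopt -s extglob    # the +([0-9]) pattern below needs extended globbing

# get_meminfo KEY [NODE]: print KEY's value from /proc/meminfo, or from the
# per-node meminfo when a NUMA node is given (cf. setup/common.sh@17-@33).
get_meminfo() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")        # strip the "Node N " prefix
    while IFS=': ' read -r var val _; do    # split "Key:   value kB"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }   # stop at the match
    done < <(printf '%s\n' "${mem[@]}")     # this loop is the elided scan
    return 1
}

get_meminfo HugePages_Surp 0    # prints 0 on this box, matching the trace

The sorted_t[nodes_test[node]]=1 and sorted_s[nodes_sys[node]]=1 assignments that follow in the trace use array keys as a set of distinct per-node counts, which is how the final [[ 512 == \5\1\2 ]] comparison ends up checking that every node holds exactly 512 pages.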
00:03:42.884 ************************************
00:03:42.884 START TEST odd_alloc
00:03:42.884 ************************************
21:47:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:42.884 21:47:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
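The @81-@84 arithmetic just traced deals the 1025 requested pages out across the two NUMA nodes, the ': 513'/': 1' and ': 0'/': 0' no-ops echoing the pages and nodes still unassigned; node1 ends up with 512 pages and node0 with 513. A minimal standalone sketch of that split (my reading of the trace, with the remainder assumed to land on node 0):

#!/usr/bin/env bash
# Deal an odd hugepage count across NUMA nodes, back to front, mirroring the
# nodes_test[_no_nodes - 1] assignments in the odd_alloc trace above.
_nr_hugepages=1025
_no_nodes=2
declare -a nodes_test

while (( _no_nodes > 0 )); do
    nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))   # this node's share
    : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))          # pages left (513, then 0)
    : $(( --_no_nodes ))                                         # nodes left (1, then 0)
done

printf 'node%d=%d\n' 1 "${nodes_test[1]}" 0 "${nodes_test[0]}"
# -> node1=512 and node0=513, i.e. 1025 pages total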
00:03:44.265 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:44.265 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:44.265 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:44.265 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:44.265 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:44.265 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:44.265 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:44.265 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:44.265 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:44.265 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:44.265 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:44.265 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:44.265 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:44.265 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:44.265 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:44.265 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:44.265 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:44.265 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:44.265 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:44.265 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:44.265 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:44.265 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:44.265 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:44.265 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:44.265 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:44.265 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:44.265 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:44.265 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:44.265 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:44.265 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:44.265 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.265 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.265 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.265 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.265 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.265 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:44.265 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:44.265 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45538364 kB' 'MemAvailable: 49044208 kB' 'Buffers: 2704 kB' 'Cached: 10514808 kB' 'SwapCached: 0 kB' 'Active: 7537600 kB' 'Inactive: 3506552 kB' 'Active(anon): 7143248 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529928 kB' 'Mapped: 212912 kB' 'Shmem: 6616608 kB' 'KReclaimable: 197128 kB' 'Slab: 566944 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 369816 kB' 'KernelStack: 12768 kB' 'PageTables: 7892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 8254976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB'
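The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at the top of verify_nr_hugepages above is a transparent-hugepage probe: this host reports 'always [madvise] never', so THP is not disabled and AnonHugePages has to be sampled (the /proc/meminfo dump above) so it can be kept out of the persistent-hugepage accounting. A hedged sketch of that probe against the standard kernel paths:

#!/usr/bin/env bash
# Only sample AnonHugePages when transparent hugepages are enabled at all;
# with THP set to [never] the counter can be treated as zero.
anon=0
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"

if [[ $thp != *"[never]"* ]]; then
    # Same match-then-stop idea as the traced read loop, done with awk here.
    anon=$(awk '$1 == "AnonHugePages:" {print $2; exit}' /proc/meminfo)
fi
echo "anon=${anon} kB"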
[xtrace elided: setup/common.sh@31-@32 scan the /proc/meminfo dump above key by key against AnonHugePages, issuing 'continue' for every earlier field]
00:03:44.267 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:44.267 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:44.267 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:44.267 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:44.267 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:44.267 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:44.267 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:44.267 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:44.267 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:44.267 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.267 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.267 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.267 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.267 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.267 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
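This get_meminfo HugePages_Surp call runs with no node argument, i.e. against /proc/meminfo as a whole. Combined with the (( 1024 == nr_hugepages + surp + resv )) check seen at setup/hugepages.sh@110 in the even_2G_alloc run, the system-wide verification for this run plausibly reduces to the sketch below (1025 is this run's request; the awk helper is a stand-in for the traced get_meminfo, not SPDK's own code):

#!/usr/bin/env bash
# System-wide hugepage accounting: HugePages_Total should equal the requested
# count plus any surplus and reserved pages.
get_meminfo() { awk -v k="$1:" '$1 == k {print $2; exit}' /proc/meminfo; }

nr_hugepages=1025
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
total=$(get_meminfo HugePages_Total)

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting OK: ${total} pages"
else
    echo "mismatch: total=${total}, expected $(( nr_hugepages + surp + resv ))" >&2
fi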
00:03:44.267 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45542916 kB' 'MemAvailable: 49048760 kB' 'Buffers: 2704 kB' 'Cached: 10514808 kB' 'SwapCached: 0 kB' 'Active: 7539036 kB' 'Inactive: 3506552 kB' 'Active(anon): 7144684 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531324 kB' 'Mapped: 212912 kB' 'Shmem: 6616608 kB' 'KReclaimable: 197128 kB' 'Slab: 566952 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 369824 kB' 'KernelStack: 13200 kB' 'PageTables: 8700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 8254996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196272 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB'
00:03:44.268 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # fields MemTotal MemFree MemAvailable Buffers Cached SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped CmaTotal CmaFree Unaccepted HugePages_Total HugePages_Free HugePages_Rsvd all != HugePages_Surp, continue
00:03:44.269 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:44.269 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:44.269 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:44.269 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
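The snapshot parsed above is self-consistent for the odd-alloc case: HugePages_Total is 1025 (the odd page count this test configures), all 1025 pages are free, none reserved or surplus, and Hugetlb equals the pool size times the page size. A quick check of that standard /proc/meminfo identity, with the numbers copied from the printf above:

    # Hugetlb should equal HugePages_Total * Hugepagesize for a 2 MiB pool.
    total=1025 pagesize_kb=2048 hugetlb_kb=2099200
    (( total * pagesize_kb == hugetlb_kb )) && echo "hugetlb accounting OK"   # 1025 * 2048 = 2099200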
00:03:44.269 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:44.269 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:44.269 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:44.269 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:44.269 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:44.269 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.269 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.269 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.269 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.269 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.269 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:44.269 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:44.269 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45542704 kB' 'MemAvailable: 49048548 kB' 'Buffers: 2704 kB' 'Cached: 10514812 kB' 'SwapCached: 0 kB' 'Active: 7538632 kB' 'Inactive: 3506552 kB' 'Active(anon): 7144280 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530800 kB' 'Mapped: 212904 kB' 'Shmem: 6616612 kB' 'KReclaimable: 197128 kB' 'Slab: 566992 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 369864 kB' 'KernelStack: 13200 kB' 'PageTables: 8796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 8255016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196288 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB'
00:03:44.270 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # fields MemTotal MemFree MemAvailable Buffers Cached SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped CmaTotal CmaFree Unaccepted HugePages_Total HugePages_Free all != HugePages_Rsvd, continue
00:03:44.270 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:44.270 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:44.270 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:44.270 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:44.270 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:44.270 nr_hugepages=1025
00:03:44.271 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:44.271 resv_hugepages=0
00:03:44.271 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:44.271 surplus_hugepages=0
00:03:44.271 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:44.271 anon_hugepages=0
00:03:44.271 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:44.271 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
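The two arithmetic checks just traced are the heart of the odd_alloc case: after requesting an odd number of hugepages (1025), the script requires that the requested count match the pool with surplus and reserved pages accounted for. With the values gathered above (surp=0, resv=0), both reduce to 1025 == 1025, so they pass silently. A sketch of the same verification, with variable names following the trace and the surrounding control flow inferred from it rather than quoted from hugepages.sh:

    # Values produced by the get_meminfo calls traced above.
    nr_hugepages=1025   # the test's requested pool size (echoed above)
    surp=0              # get_meminfo HugePages_Surp
    resv=0              # get_meminfo HugePages_Rsvd
    anon=0              # get_meminfo AnonHugePages

    # hugepages.sh@107/@109 equivalents: the requested odd count must be
    # fully accounted for, with no surplus or reserved slack.
    (( 1025 == nr_hugepages + surp + resv )) || exit 1
    (( 1025 == nr_hugepages )) || exit 1

After these pass, the script immediately re-reads HugePages_Total from the kernel (the get_meminfo call that follows) to confirm the pool really holds the odd page count.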
00:03:44.271 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:44.271 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:44.271 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:44.271 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:44.271 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:44.271 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.271 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.271 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.271 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.271 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.271 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:44.271 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:44.271 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45541088 kB' 'MemAvailable: 49046932 kB' 'Buffers: 2704 kB' 'Cached: 10514848 kB' 'SwapCached: 0 kB' 'Active: 7539692 kB' 'Inactive: 3506552 kB' 'Active(anon): 7145340 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531876 kB' 'Mapped: 212904 kB' 'Shmem: 6616648 kB' 'KReclaimable: 197128 kB' 'Slab: 566992 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 369864 kB' 'KernelStack: 13344 kB' 'PageTables: 10224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 8256400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196416 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB'
00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # fields MemTotal MemFree MemAvailable Buffers Cached SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped CmaTotal CmaFree Unaccepted all != HugePages_Total, continue
00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val
_ 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.272 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.273 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28252584 kB' 'MemUsed: 4577300 kB' 'SwapCached: 0 kB' 'Active: 2468900 kB' 'Inactive: 108696 kB' 'Active(anon): 2358012 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2332460 kB' 'Mapped: 89140 kB' 'AnonPages: 248264 kB' 'Shmem: 2112876 kB' 'KernelStack: 7400 kB' 'PageTables: 4852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91976 kB' 'Slab: 304636 kB' 'SReclaimable: 91976 kB' 'SUnreclaim: 212660 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
[...xtrace elided: "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" repeated for each node0 meminfo field...]
00:03:44.274 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:44.274 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:44.274 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:44.274 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:44.274 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:44.274 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:44.274 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:44.274 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:44.274 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:44.274 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:44.274 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:44.274 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.274 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:44.274 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:44.274 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.274 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.274 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:44.274 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:44.274 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17288688 kB' 'MemUsed: 10423136 kB' 'SwapCached: 0 kB' 'Active: 5069328 kB' 'Inactive: 3397856 kB' 'Active(anon): 4785864 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3397856 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8185132 kB' 'Mapped: 123756 kB' 'AnonPages: 282100 kB' 'Shmem: 4503812 kB' 'KernelStack: 5416 kB' 'PageTables: 3108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105152 kB' 'Slab: 262356 kB' 'SReclaimable: 105152 kB' 'SUnreclaim: 157204 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[...xtrace elided: "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" repeated for each node1 meminfo field...]
00:03:44.275 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:44.275 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:44.275 21:47:03 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:44.275 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:44.275 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:44.275 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:44.275 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:44.275 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:44.275 node0=512 expecting 513
00:03:44.275 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:44.275 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:44.275 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:44.275 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:44.275 node1=513 expecting 512
00:03:44.275 21:47:03 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:44.275
00:03:44.275 real	0m1.386s
00:03:44.275 user	0m0.569s
00:03:44.275 sys	0m0.771s
00:03:44.275 21:47:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:44.275 21:47:03 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:44.275 ************************************
00:03:44.275 END TEST odd_alloc
00:03:44.275 ************************************
00:03:44.533 21:47:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:44.533 21:47:03 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:44.533 21:47:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:44.533 21:47:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:44.533 21:47:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:44.533 ************************************
00:03:44.533 START TEST custom_alloc
00:03:44.533 ************************************
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
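The page counts above follow from dividing the requested size in kB by the default hugepage size: 1048576 kB / 2048 kB = 512 pages and 2097152 kB / 2048 kB = 1024 pages (the 2048 kB Hugepagesize is confirmed by the meminfo dump further down). A small sketch of that arithmetic (the helper name pages_for_size is ours, not SPDK's):

# pages_for_size: convert a requested allocation in kB into a hugepage
# count using the kernel's default hugepage size (2048 kB on this run).
pages_for_size() {
    local size_kb=$1 hp_kb
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    echo $(( size_kb / hp_kb ))
}
pages_for_size 1048576   # 1 GiB worth of pages -> 512
pages_for_size 2097152   # 2 GiB worth of pages -> 1024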
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:44.533 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:44.534 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:44.534 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:44.534 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:44.534 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:44.534 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:44.534 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:44.534 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:44.534 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:44.534 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:44.534 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:44.534 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:44.534 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:44.534 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:44.534 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:44.534 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:44.534 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:44.534 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:44.534 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:44.534 21:47:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:44.534 21:47:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:44.534 21:47:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:45.466 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:45.466 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:45.466 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:45.466 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:45.466 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:45.466 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:45.466 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:45.466 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:45.466 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:45.466 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:45.466 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:45.466 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:45.466 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:45.747 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:45.747 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:45.747 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:45.747 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:45.747 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:45.747 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:45.747 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:03:45.747 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:45.747 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:45.747 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:45.747 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:45.747 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:45.747 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:45.747 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:45.747 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:45.747 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:45.747 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:45.747 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.747 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.747 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.747 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.747 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.747 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.747 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.747 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44500832 kB' 'MemAvailable: 48006676 kB' 'Buffers: 2704 kB' 'Cached: 10514932 kB' 'SwapCached: 0 kB' 'Active: 7537432 kB' 'Inactive: 3506552 kB' 'Active(anon): 7143080 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529500 kB' 'Mapped: 212912 kB' 'Shmem: 6616732 kB' 'KReclaimable: 197128 kB' 'Slab: 566968 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 369840 kB' 'KernelStack: 12768 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 8254104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB'
[...xtrace elided: "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue" repeated for each meminfo field...]
# IFS=': ' 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.748 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc 
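The trace above is the shell's `set -x` view of `get_meminfo`, which resolves a single field of `/proc/meminfo` (or of a per-NUMA-node meminfo file) by splitting each line on `IFS=': '` and stopping at the first matching key. A minimal standalone sketch of that pattern, inferred from the trace rather than copied from SPDK's test/setup/common.sh (the real helper also strips the `Node <n> ` prefix from per-node files via extglob, which this sketch omits):

```bash
#!/usr/bin/env bash
# Look up one field of /proc/meminfo the way the trace above does:
# split each "Key: value kB" line on ':' and whitespace, print the
# value of the first key that matches, and stop there.
get_meminfo() {
	local get=$1 node=${2:-}
	local var val
	local mem_f=/proc/meminfo
	# Per-node queries read the node-local meminfo when it exists.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
	done <"$mem_f"
	return 1
}

get_meminfo AnonHugePages # prints 0 on this node, matching the trace
```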
00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44505028 kB' 'MemAvailable: 48010872 kB' 'Buffers: 2704 kB' 'Cached: 10514932 kB' 'SwapCached: 0 kB' 'Active: 7538524 kB' 'Inactive: 3506552 kB' 'Active(anon): 7144172 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530612 kB' 'Mapped: 212920 kB' 'Shmem: 6616732 kB' 'KReclaimable: 197128 kB' 'Slab: 566960 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 369832 kB' 'KernelStack: 12832 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 8253756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB'
00:03:45.749 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [.. per-key scan elided: every field ahead of HugePages_Surp (MemTotal through HugePages_Free) fails the match and falls through to continue ..]
00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
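With the anonymous and surplus counts in hand, one more lookup (HugePages_Rsvd, below) gives verify_nr_hugepages everything it needs for the accounting check that follows in the trace: the pool is considered consistent when the kernel's total matches the requested nr_hugepages plus any surplus and reserved pages. A standalone sketch of that check, mirroring the `(( 1536 == nr_hugepages + surp + resv ))` line below; the hp_* names are illustrative, not the script's own:

```bash
#!/usr/bin/env bash
# Verify hugepage pool accounting against /proc/meminfo, mirroring the
# nr_hugepages + surp + resv comparison performed later in this trace.
nr_hugepages=1536 # what the test requested

hp_total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
hp_surp=$(awk '/^HugePages_Surp/ {print $2}' /proc/meminfo)
hp_rsvd=$(awk '/^HugePages_Rsvd/ {print $2}' /proc/meminfo)

# In this run surplus and reserved are both 0, so total must equal the request.
if (( hp_total == nr_hugepages + hp_surp + hp_rsvd )); then
	echo "hugepage pool consistent: total=$hp_total"
else
	echo "hugepage pool mismatch: total=$hp_total expected=$((nr_hugepages + hp_surp + hp_rsvd))" >&2
	exit 1
fi
```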
'Mapped: 212920 kB' 'Shmem: 6616752 kB' 'KReclaimable: 197128 kB' 'Slab: 567000 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 369872 kB' 'KernelStack: 12832 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 8253780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB' 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.751 21:47:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.751 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.752 21:47:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.752 21:47:05 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.752 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:45.753 nr_hugepages=1536
00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:45.753 resv_hugepages=0
00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:45.753 surplus_hugepages=0
00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:45.753 anon_hugepages=0
00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.753 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 44505216 kB' 'MemAvailable: 48011060 kB' 'Buffers: 2704 kB' 'Cached: 10514972 kB' 'SwapCached: 0 kB' 'Active: 7537900 kB' 'Inactive: 3506552 kB' 'Active(anon): 7143548 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529984 kB' 'Mapped: 212920 kB' 'Shmem: 6616772 kB' 'KReclaimable: 197128 kB' 'Slab: 567000 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 369872 kB' 'KernelStack: 12720 kB' 'PageTables: 7660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 8253928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB'
[... xtrace: the key scan repeats over the snapshot above, skipping every non-matching key (MemTotal through HugePages_Free) via setup/common.sh@32 "continue", until HugePages_Total matches ...]
00:03:46.023 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:46.023 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:46.023 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:46.023 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:46.023 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:46.023 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:46.023 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:46.023 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:46.023 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:46.023 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:46.023 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:46.023 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
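For orientation: the helper being traced here, get_meminfo from SPDK's test/setup/common.sh, reads either /proc/meminfo or a per-node /sys/devices/system/node/node<N>/meminfo file and prints the value of one key. A minimal sketch reconstructed from the xtrace alone, not the verbatim script — the branch taken when a node id is given but its meminfo file is missing, and the trailing return 1, are assumptions:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
      local get=$1 node=$2
      local var val
      local mem_f mem
      mem_f=/proc/meminfo
      # Per-node counters live in sysfs; prefer them when a node id is given
      # (mirrors common.sh@22-@25 in the trace; the elif branch is a guess).
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
      elif [[ -n $node ]]; then
        return 1
      fi
      mapfile -t mem <"$mem_f"
      # Strip the "Node N " prefix that sysfs puts on every line (common.sh@29).
      mem=("${mem[@]#Node +([0-9]) }")
      # Scan key/value pairs; this loop is what produces the long "continue"
      # runs collapsed above (common.sh@31-@33, fed by the printf at @16).
      while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
    }

Called as get_meminfo HugePages_Total it scans /proc/meminfo, as just traced; called as get_meminfo HugePages_Surp 0, as in the per-node walk below, it scans node0's sysfs snapshot instead.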
00:03:46.023 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:46.023 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:46.023 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:46.023 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:46.023 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:46.023 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:46.023 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:46.023 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.023 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:46.023 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:46.023 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.024 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.024 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:46.024 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:46.024 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28274208 kB' 'MemUsed: 4555676 kB' 'SwapCached: 0 kB' 'Active: 2468860 kB' 'Inactive: 108696 kB' 'Active(anon): 2357972 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2332476 kB' 'Mapped: 89160 kB' 'AnonPages: 248268 kB' 'Shmem: 2112892 kB' 'KernelStack: 7288 kB' 'PageTables: 4560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91976 kB' 'Slab: 304656 kB' 'SReclaimable: 91976 kB' 'SUnreclaim: 212680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace: the key scan walks the node0 snapshot above (MemTotal through HugePages_Free), each non-matching key hitting setup/common.sh@32 "continue", until HugePages_Surp matches ...]
00:03:46.025 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:46.025 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:46.025 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:46.025 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
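Before the node1 pass repeats the same scan, one detail worth pulling out: unlike /proc/meminfo, each line of a per-node sysfs meminfo file is prefixed with "Node <N> ", which common.sh@29 strips with an extglob parameter expansion. A standalone demo of just that expansion (the two sample lines are copied from the node0 snapshot above):

    shopt -s extglob   # required for the +([0-9]) pattern
    mem=('Node 0 MemTotal: 32829884 kB' 'Node 0 HugePages_Surp: 0')
    mem=("${mem[@]#Node +([0-9]) }")   # as in setup/common.sh@29
    printf '%s\n' "${mem[@]}"          # prints both lines with the prefix gone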
00:03:46.025 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:46.025 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:46.025 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:46.025 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:46.025 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:46.025 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:46.025 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:46.025 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.025 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:46.025 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:46.025 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.025 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.025 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:46.025 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:46.025 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16231188 kB' 'MemUsed: 11480636 kB' 'SwapCached: 0 kB' 'Active: 5068956 kB' 'Inactive: 3397856 kB' 'Active(anon): 4785492 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3397856 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8185232 kB' 'Mapped: 123760 kB' 'AnonPages: 281624 kB' 'Shmem: 4503912 kB' 'KernelStack: 5512 kB' 'PageTables: 3360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105152 kB' 'Slab: 262312 kB' 'SReclaimable: 105152 kB' 'SUnreclaim: 157160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace: the key scan walks the node1 snapshot above (MemTotal through HugePages_Free), each non-matching key hitting setup/common.sh@32 "continue", until HugePages_Surp matches ...]
00:03:46.026 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:46.026 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:46.026 21:47:05 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:46.026 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
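Both per-node lookups returned 0 surplus pages. The accounting stepped through at setup/hugepages.sh@115-@117, and the reporting loop that follows at @126-@128, amount to roughly the following — array and variable names are taken from the trace, but the loop bodies are an approximation, not the verbatim script:

    # Fold reserved and per-node surplus pages into each node's expected count.
    for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))                                     # @116
      (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))    # @117
    done
    # Then report the computed count against what sysfs says each node holds.
    for node in "${!nodes_test[@]}"; do
      sorted_t[nodes_test[node]]=1                                       # @127
      sorted_s[nodes_sys[node]]=1
      echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"  # @128
    done

With resv=0 and zero surplus on both nodes, nodes_test stays at 512 and 1024, which is why the reporting loop below prints "node0=512 expecting 512" and "node1=1024 expecting 1024".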
sorted_s[nodes_sys[node]]=1 00:03:46.026 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:46.026 node0=512 expecting 512 00:03:46.026 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.026 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.026 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.026 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:46.026 node1=1024 expecting 1024 00:03:46.026 21:47:05 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:46.026 00:03:46.026 real 0m1.507s 00:03:46.026 user 0m0.632s 00:03:46.026 sys 0m0.835s 00:03:46.026 21:47:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.026 21:47:05 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:46.026 ************************************ 00:03:46.026 END TEST custom_alloc 00:03:46.026 ************************************ 00:03:46.026 21:47:05 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:46.026 21:47:05 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:46.026 21:47:05 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.026 21:47:05 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.026 21:47:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:46.026 ************************************ 00:03:46.026 START TEST no_shrink_alloc 00:03:46.026 ************************************ 00:03:46.026 21:47:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:46.026 21:47:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:46.026 21:47:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:46.026 21:47:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:46.027 21:47:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:46.027 21:47:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:46.027 21:47:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:46.027 21:47:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:46.027 21:47:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:46.027 21:47:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:46.027 21:47:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:46.027 21:47:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.027 21:47:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:46.027 21:47:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:46.027 21:47:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:46.027 21:47:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g 
nodes_test 00:03:46.027 21:47:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:46.027 21:47:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:46.027 21:47:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:46.027 21:47:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:46.027 21:47:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:46.027 21:47:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.027 21:47:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:46.961 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:46.961 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:46.961 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:46.961 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:46.961 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:46.961 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:46.961 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:46.961 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:46.961 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:46.961 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:46.961 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:46.961 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:46.961 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:46.961 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:46.961 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:46.962 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:46.962 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.226 21:47:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45502000 kB' 'MemAvailable: 49007844 kB' 'Buffers: 2704 kB' 'Cached: 10515064 kB' 'SwapCached: 0 kB' 'Active: 7538184 kB' 'Inactive: 3506552 kB' 'Active(anon): 7143832 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530136 kB' 'Mapped: 212988 kB' 'Shmem: 6616864 kB' 'KReclaimable: 197128 kB' 'Slab: 566756 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 369628 kB' 'KernelStack: 12816 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8254528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB' 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
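The scan above is setup/common.sh's get_meminfo walking /proc/meminfo for a single field: mapfile slurps the file, a "Node +([0-9]) " prefix is stripped so per-node meminfo files parse the same way, and each "Field: value" line is split with IFS=': ' until the requested name matches, at which point the value is echoed and the function returns. A minimal sketch of that pattern, reconstructed from the trace lines (setup/common.sh@16-33); the exact SPDK function body is assumed, not copied:

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) prefix strip below

# get_meminfo FIELD [NODE] -- echo the value of FIELD from /proc/meminfo,
# or from the per-node meminfo when NODE is given and that file exists.
get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    # The trace shows the probe [[ -e /sys/devices/system/node/node$node/meminfo ]].
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # traced above as the long continue chain
        echo "${val:-0}"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    echo 0   # field not present
}

Called as anon=$(get_meminfo AnonHugePages), which is why each scan in the trace ends with "echo 0" / "return 0" and hugepages.sh@97 then records anon=0.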
00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
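Each comparison in these scans is printed with the right-hand side backslash-escaped (\A\n\o\n\H\u\g\e\P\a\g\e\s, \H\u\g\e\P\a\g\e\s\_\S\u\r\p): inside [[ ]] the word after == sits in glob-pattern position, and when it comes from a parameter expansion, bash's xtrace escapes each character so the printed command re-parses to the same literal string. The escaping is a display artifact of set -x, not a change in how the match runs. A stand-alone reproduction, assumed to render the same way as the trace above on the bash used here:

set -x
get=AnonHugePages
[[ MemTotal == $get ]]   # traced as: [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
set +x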
00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.226 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45502416 kB' 'MemAvailable: 49008260 kB' 'Buffers: 2704 kB' 'Cached: 10515064 kB' 'SwapCached: 0 kB' 'Active: 7537872 kB' 'Inactive: 3506552 kB' 'Active(anon): 7143520 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529812 kB' 'Mapped: 212936 kB' 'Shmem: 6616864 kB' 'KReclaimable: 197128 kB' 'Slab: 566756 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 369628 kB' 'KernelStack: 12816 kB' 'PageTables: 7844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8254544 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB' 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.227 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 
21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.228 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
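This second pass is verify_nr_hugepages gathering the global counters before the per-node accounting: hugepages.sh@96 first checks /sys/kernel/mm/transparent_hugepage/enabled (here "always [madvise] never", i.e. not pinned to [never]), so @97 reads AnonHugePages (anon=0); @99 is mid-scan here for HugePages_Surp, which comes back 0 just below, and @100 then reads HugePages_Rsvd the same way. A sketch of that sequence, reusing the get_meminfo sketch above; the structure and the verify_counters name are inferred from the trace, not the verbatim hugepages.sh:

# Collect the counters the verifier subtracts from HugePages_Total/Free.
verify_counters() {
    local anon=0 surp resv
    # The @96 guard: only count AnonHugePages when THP is not forced to [never].
    if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)
    fi
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    echo "anon=$anon surp=$surp resv=$resv"
}

In this log all three come back 0 against HugePages_Total/HugePages_Free of 1024, so the per-node bookkeeping that follows starts from a clean surplus and reserve.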
00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.229 21:47:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45502416 kB' 'MemAvailable: 49008260 kB' 'Buffers: 2704 kB' 'Cached: 10515084 kB' 'SwapCached: 0 kB' 'Active: 7537996 kB' 'Inactive: 3506552 kB' 'Active(anon): 7143644 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529960 kB' 'Mapped: 212936 kB' 'Shmem: 6616884 kB' 'KReclaimable: 197128 kB' 'Slab: 566808 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 369680 kB' 'KernelStack: 12816 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8254568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB'
00:03:47.229 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [condensed: the IFS=': ' read loop tests each /proc/meminfo key against HugePages_Rsvd and hits continue for every one of MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total and HugePages_Free]
00:03:47.231 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.231 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:47.231 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:47.231 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:47.231 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:47.231 nr_hugepages=1024
00:03:47.231 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:47.231 resv_hugepages=0
00:03:47.231 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:47.231 surplus_hugepages=0
00:03:47.231 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:47.231 anon_hugepages=0
00:03:47.231 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:47.231 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:47.231 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:47.231 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:47.231 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:47.231 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:47.231 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:47.231 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:47.231 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:47.231 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:47.231 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:47.231 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.231 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:47.231 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:47.231 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45503068 kB' 'MemAvailable: 49008912 kB' 'Buffers: 2704 kB' 'Cached: 10515108 kB' 'SwapCached: 0 kB' 'Active: 7538064 kB' 'Inactive: 3506552 kB' 'Active(anon): 7143712 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530028 kB' 'Mapped: 212936 kB' 'Shmem: 6616908 kB' 'KReclaimable: 197128 kB' 'Slab: 566804 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 369676 kB' 'KernelStack: 12864 kB' 'PageTables: 7964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8254588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB'
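Each of these per-key walks is get_meminfo's lookup loop: split every meminfo line on ': ', continue past keys that do not match, then echo the requested key's value and return. A self-contained sketch of that idiom; get_meminfo_value is an illustrative name for this sketch, not the actual SPDK helper:

    # Usage: get_meminfo_value HugePages_Rsvd   -> prints 0 for the snapshot above
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # meminfo lines look like "HugePages_Rsvd:       0"; IFS eats the colon
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1                                # key not present
    }

Against the snapshot printed above, get_meminfo_value HugePages_Rsvd yields 0 and get_meminfo_value HugePages_Total yields 1024.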
00:03:47.231 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [condensed: the same read loop now scans for HugePages_Total, continuing past every key from MemTotal through Unaccepted in the snapshot order above]
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
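The assertions at hugepages.sh@107, @109 and @110 all reduce to one identity: the kernel's HugePages_Total must equal the requested page count plus surplus and reserved pages. A minimal sketch of that check, reusing the illustrative get_meminfo_value helper sketched earlier:

    nr_hugepages=1024 surp=0 resv=0
    total=$(get_meminfo_value HugePages_Total)          # 1024 in the snapshot above
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total pages"
    else
        echo "mismatch: HugePages_Total=$total vs $((nr_hugepages + surp + resv))" >&2
    fi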
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27215424 kB' 'MemUsed: 5614460 kB' 'SwapCached: 0 kB' 'Active: 2468572 kB' 'Inactive: 108696 kB' 'Active(anon): 2357684 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2332584 kB' 'Mapped: 89172 kB' 'AnonPages: 247824 kB' 'Shmem: 2113000 kB' 'KernelStack: 7352 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91976 kB' 'Slab: 304508 kB' 'SReclaimable: 91976 kB' 'SUnreclaim: 212532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
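get_nodes and the node=0 lookup above lean on two bash idioms: an extglob pattern to enumerate NUMA node directories, and a prefix strip so per-node meminfo lines parse like the system-wide ones. A sketch under those assumptions (the hugepages sysfs path below is this sketch's guess, not taken from the trace):

    shopt -s extglob                                    # makes +([0-9]) legal, as at hugepages.sh@29
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # per-node 2 MB hugepage count; path is an assumption of this sketch
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"                    # 2 on this box, per @32

    # per-node meminfo lines carry a "Node N " prefix; strip it so the same
    # IFS=': ' parse works for node and system lookups (cf. common.sh@24/@29)
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")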
00:03:47.233 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [condensed: the read loop scans node0's meminfo for HugePages_Surp, continuing past every key in the node snapshot above, from MemTotal through HugePages_Free]
00:03:47.234 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.234 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:47.234 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:47.234 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:47.234 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:47.234 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:47.234 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:47.234 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:47.235 node0=1024 expecting 1024
00:03:47.235 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:47.235 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:47.235 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:47.235 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:47.235 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:47.235 21:47:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:48.614 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:48.614 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:48.614 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:48.614 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:48.614 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:48.614 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:48.615 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:48.615 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:48.615 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:48.615 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:48.615 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:48.615 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:48.615 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:48.615 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:48.615 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:48.615 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:48.615 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:48.615 INFO: Requested 512 hugepages but 1024 already allocated on node0
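That INFO line is the namesake behavior of no_shrink_alloc: with CLEAR_HUGE=no and NRHUGE=512, setup.sh leaves the existing 1024-page pool intact rather than shrinking it. A rough sketch of such a no-shrink policy, assuming the standard per-node sysfs knob; this is illustrative, not the actual scripts/setup.sh code:

    want=${NRHUGE:-512}
    node_sysfs=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
    have=$(< "$node_sysfs/nr_hugepages")
    if (( have >= want )); then
        # never shrink an existing allocation
        echo "INFO: Requested $want hugepages but $have already allocated on node0"
    else
        echo "$want" > "$node_sysfs/nr_hugepages"   # needs root
    fi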
00:03:48.615 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:48.615 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:48.615 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:48.615 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:48.615 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:48.615 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:48.615 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:48.615 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:48.615 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:48.615 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:48.615 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:48.615 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:48.615 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:48.615 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.615 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.615 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.615 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.615 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.615 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:48.615 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:48.615 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45498180 kB' 'MemAvailable: 49004024 kB' 'Buffers: 2704 kB' 'Cached: 10515172 kB' 'SwapCached: 0 kB' 'Active: 7543960 kB' 'Inactive: 3506552 kB' 'Active(anon): 7149608 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535840 kB' 'Mapped: 213908 kB' 'Shmem: 6616972 kB' 'KReclaimable: 197128 kB' 'Slab: 566896 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 369768 kB' 'KernelStack: 12848 kB' 'PageTables: 7948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8260888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196196 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB'
00:03:48.615 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:48.615 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[xtrace elided: the same setup/common.sh@31 IFS=': '/read and @32 pattern-test/continue lines repeat for every remaining /proc/meminfo field, MemFree through HardwareCorrupted, none matching AnonHugePages]
00:03:48.616 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:48.616 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:48.616 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:48.616 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
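Every get_meminfo call traced here follows the same pattern: read /proc/meminfo (or a node's copy under /sys) into an array with mapfile, strip any "Node N " prefix, then split each line on ': ' and echo the value once the field name matches, which is why the trace walks MemTotal through HardwareCorrupted before landing on AnonHugePages. A self-contained sketch of that idiom (a hypothetical helper illustrating the technique, not the verbatim SPDK common.sh):

  #!/usr/bin/env bash
  shopt -s extglob                       # needed for the +([0-9]) pattern below
  # Hypothetical get_meminfo-style lookup: get_meminfo FIELD [NODE]
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      local -a mem
      local var val _ line
      # A node-specific copy lives under /sys and prefixes every line with "Node N ".
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix when present
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

  get_meminfo HugePages_Surp             # prints 0 on this box, per the snapshot above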
00:03:48.616 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:48.616 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:48.616 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:48.616 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:48.616 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:48.616 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.616 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.616 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.616 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.616 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.616 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:48.616 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:48.616 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45497044 kB' 'MemAvailable: 49002888 kB' 'Buffers: 2704 kB' 'Cached: 10515176 kB' 'SwapCached: 0 kB' 'Active: 7538980 kB' 'Inactive: 3506552 kB' 'Active(anon): 7144628 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530888 kB' 'Mapped: 213860 kB' 'Shmem: 6616976 kB' 'KReclaimable: 197128 kB' 'Slab: 566884 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 369756 kB' 'KernelStack: 12896 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8256140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB'
00:03:48.616 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.616 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[xtrace elided: the same setup/common.sh@31 IFS=': '/read and @32 pattern-test/continue lines repeat for every remaining /proc/meminfo field, MemFree through HugePages_Rsvd, none matching HugePages_Surp]
00:03:48.618 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.618 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:48.618 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:48.618 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
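One cosmetic point about the trace itself: the patterns print as \H\u\g\e\P\a\g\e\s\_\S\u\r\p because the right-hand side of the [[ == ]] test is a quoted expansion, so bash compares it literally rather than as a glob, and xtrace backslash-escapes every character to make that visible. A short demonstration (a hypothetical snippet, not taken from this job):

  #!/usr/bin/env bash
  set -x                      # enable xtrace, as the test scripts do
  get=HugePages_Surp
  var=MemTotal
  # Quoted RHS => literal match; the xtrace line comes out as:
  #   [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
  [[ $var == "$get" ]] || echo 'no match'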
00:03:48.618 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:48.618 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:48.618 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:48.618 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:48.618 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:48.618 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.618 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.618 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.618 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.618 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.618 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:48.618 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:48.618 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45492760 kB' 'MemAvailable: 48998604 kB' 'Buffers: 2704 kB' 'Cached: 10515196 kB' 'SwapCached: 0 kB' 'Active: 7542020 kB' 'Inactive: 3506552 kB' 'Active(anon): 7147668 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533840 kB' 'Mapped: 213380 kB' 'Shmem: 6616996 kB' 'KReclaimable: 197128 kB' 'Slab: 566884 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 369756 kB' 'KernelStack: 12832 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8259464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB'
00:03:48.618 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:48.618 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[xtrace elided: the same setup/common.sh@31 IFS=': '/read and @32 pattern-test/continue lines repeat for each /proc/meminfo field from MemFree through CmaFree, none matching HugePages_Rsvd]
00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:48.620 nr_hugepages=1024 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.620 resv_hugepages=0 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.620 surplus_hugepages=0 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.620 anon_hugepages=0 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- 
00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:48.620 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45492760 kB' 'MemAvailable: 48998604 kB' 'Buffers: 2704 kB' 'Cached: 10515216 kB' 'SwapCached: 0 kB' 'Active: 7543912 kB' 'Inactive: 3506552 kB' 'Active(anon): 7149560 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535724 kB' 'Mapped: 213796 kB' 'Shmem: 6617016 kB' 'KReclaimable: 197128 kB' 'Slab: 566916 kB' 'SReclaimable: 197128 kB' 'SUnreclaim: 369788 kB' 'KernelStack: 12880 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 8260948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196164 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1908316 kB' 'DirectMap2M: 15837184 kB' 'DirectMap1G: 51380224 kB'
[… xtrace condensed: every key from MemTotal through HugePages_Free is tested against HugePages_Total and skipped via continue …]
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:48.622 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27197440 kB' 'MemUsed: 5632444 kB' 'SwapCached: 0 kB' 'Active: 2467900 kB' 'Inactive: 108696 kB' 'Active(anon): 2357012 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2332640 kB' 'Mapped: 89444 kB' 'AnonPages: 247072 kB' 'Shmem: 2113056 kB' 'KernelStack: 7336 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91976 kB' 'Slab: 304484 kB' 'SReclaimable: 91976 kB' 'SUnreclaim: 212508 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[… xtrace condensed: every node0 key from MemTotal through HugePages_Free is tested against HugePages_Surp and skipped via continue …]
00:03:48.623 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.623 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:48.623 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:48.623 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:48.623 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:48.623 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:48.624 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:48.624 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:48.624 node0=1024 expecting 1024
00:03:48.624 21:47:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:48.624 
00:03:48.624 real	0m2.728s
00:03:48.624 user	0m1.115s
00:03:48.624 sys	0m1.526s
00:03:48.624 21:47:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:48.624 21:47:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:48.624 ************************************
00:03:48.624 END TEST no_shrink_alloc
00:03:48.624 ************************************
00:03:48.624 21:47:07 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
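The node0=1024 check the test just passed verifies the per-node split of the hugepage pool against the global count. A hedged sketch of that per-node accounting over sysfs; the paths are the standard kernel ones, and the 2048kB page size is an assumption matching the Hugepagesize reported in the dump above:

    #!/usr/bin/env bash
    # Sum nr_hugepages across NUMA nodes; the total should equal the
    # global pool (1024 here, all of it on node0).
    shopt -s nullglob
    total=0
    for node in /sys/devices/system/node/node[0-9]*; do
        f=$node/hugepages/hugepages-2048kB/nr_hugepages
        [[ -r $f ]] || continue
        n=$(<"$f")
        echo "${node##*/}=$n"
        (( total += n ))
    done
    echo "total=$total"   # expected to match /proc/sys/vm/nr_hugepages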
00:03:48.624 21:47:07 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:03:48.624 21:47:07 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:48.624 21:47:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:48.624 21:47:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:48.624 21:47:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:48.624 21:47:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:48.624 21:47:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:48.624 21:47:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:48.624 21:47:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:48.624 21:47:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:48.624 21:47:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:48.624 21:47:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:48.624 21:47:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:48.624 21:47:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:48.624 
00:03:48.624 real	0m11.291s
00:03:48.624 user	0m4.350s
00:03:48.624 sys	0m5.799s
00:03:48.624 21:47:07 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:48.624 21:47:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:48.624 ************************************
00:03:48.624 END TEST hugepages
00:03:48.624 ************************************
00:03:48.881 21:47:08 setup.sh -- common/autotest_common.sh@1142 -- # return 0
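clear_hp above resets every node's hugepage counts so the next test starts from a clean slate. A minimal sketch of the same cleanup, assuming root and the standard sysfs layout:

    #!/usr/bin/env bash
    # Zero every node's hugepage pool, for every page size present.
    shopt -s nullglob
    for hp in /sys/devices/system/node/node[0-9]*/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
    export CLEAR_HUGE=yes   # the flag the SPDK setup scripts key off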
00:03:48.881 21:47:08 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:48.881 21:47:08 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:48.881 21:47:08 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:48.881 21:47:08 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:48.881 ************************************
00:03:48.881 START TEST driver
00:03:48.881 ************************************
00:03:48.881 21:47:08 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:48.881 * Looking for test storage...
00:03:48.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:48.881 21:47:08 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:03:48.881 21:47:08 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:48.881 21:47:08 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:51.460 21:47:10 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:03:51.460 21:47:10 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:51.460 21:47:10 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:51.460 21:47:10 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:51.460 ************************************
00:03:51.460 START TEST guess_driver
00:03:51.460 ************************************
00:03:51.460 21:47:10 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver
00:03:51.460 21:47:10 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:03:51.460 21:47:10 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:03:51.460 21:47:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:03:51.460 21:47:10 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:03:51.460 21:47:10 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups
00:03:51.460 21:47:10 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:03:51.460 21:47:10 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:03:51.460 21:47:10 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:03:51.460 21:47:10 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:03:51.460 21:47:10 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 ))
00:03:51.460 21:47:10 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:03:51.460 21:47:10 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:03:51.460 21:47:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:03:51.460 21:47:10 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:03:51.460 21:47:10 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:03:51.460 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:51.460 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:51.460 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:51.460 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:51.460 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:03:51.460 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:03:51.460 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:03:51.460 21:47:10 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:03:51.460 21:47:10 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:03:51.460 21:47:10 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
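pick_driver chose vfio-pci here because the host exposes populated IOMMU groups (141 of them) and the vfio_pci module resolves via modprobe. A hedged sketch of that decision logic; the fallback to uio_pci_generic is our reading of the script's intent, not a verbatim copy:

    #!/usr/bin/env bash
    shopt -s nullglob
    # Prefer vfio-pci when the IOMMU is populated (or unsafe no-IOMMU
    # mode is enabled); otherwise fall back to uio_pci_generic.
    pick_driver() {
        local unsafe=N groups=(/sys/kernel/iommu_groups/*)
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            # modprobe --show-depends fails when the module is absent
            modprobe --show-depends vfio_pci > /dev/null 2>&1 && { echo vfio-pci; return 0; }
        fi
        echo uio_pci_generic
    }

    pick_driver   # on this box: vfio-pci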
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:51.460 21:47:10 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:51.460 Looking for driver=vfio-pci 00:03:51.460 21:47:10 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:51.460 21:47:10 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:51.460 21:47:10 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.460 21:47:10 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:52.396 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.396 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.396 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.396 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.396 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.396 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.396 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.396 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.396 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.656 21:47:11 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:52.656 21:47:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.595 21:47:12 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:53.595 21:47:12 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:53.595 21:47:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:53.595 21:47:12 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:53.595 21:47:12 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:53.595 21:47:12 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:53.595 21:47:12 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:56.124 00:03:56.124 real 0m4.803s 00:03:56.124 user 0m1.086s 00:03:56.124 sys 0m1.819s 00:03:56.124 21:47:15 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.125 21:47:15 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:56.125 ************************************ 00:03:56.125 END TEST guess_driver 00:03:56.125 ************************************ 00:03:56.125 21:47:15 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:56.125 00:03:56.125 real 0m7.355s 00:03:56.125 user 0m1.653s 00:03:56.125 sys 0m2.823s 00:03:56.125 21:47:15 
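
The guess_driver run above reduces to three checks: whether vfio's unsafe no-IOMMU mode is enabled, whether any IOMMU groups exist under /sys/kernel/iommu_groups (141 were found here), and whether vfio_pci resolves to real .ko modules via modprobe --show-depends. A minimal standalone sketch of that decision, using the sysfs paths from the trace; the uio_pci_generic fallback is an assumption, since this run only exercises the vfio branch:

    shopt -s nullglob
    guess_driver() {
        local unsafe_vfio=N
        # Unsafe no-IOMMU mode makes vfio usable even without IOMMU groups.
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        # Count IOMMU groups; the run above saw 141 of them.
        local groups=(/sys/kernel/iommu_groups/*)
        if [[ $unsafe_vfio == Y ]] || (( ${#groups[@]} > 0 )); then
            # Confirm vfio_pci actually resolves to loadable .ko modules.
            if modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
                echo vfio-pci
                return 0
            fi
        fi
        echo uio_pci_generic   # assumed fallback, not exercised in this run
    }
    guess_driver
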
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.125 21:47:15 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:56.125 ************************************ 00:03:56.125 END TEST driver 00:03:56.125 ************************************ 00:03:56.125 21:47:15 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:56.125 21:47:15 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:56.125 21:47:15 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.125 21:47:15 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.125 21:47:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:56.125 ************************************ 00:03:56.125 START TEST devices 00:03:56.125 ************************************ 00:03:56.125 21:47:15 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:56.125 * Looking for test storage... 00:03:56.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:56.125 21:47:15 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:56.125 21:47:15 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:56.125 21:47:15 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:56.125 21:47:15 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:58.026 21:47:16 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:58.026 21:47:16 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:58.026 21:47:16 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:58.026 21:47:16 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:58.026 21:47:16 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:58.026 21:47:16 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:58.026 21:47:16 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:58.026 21:47:16 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:58.026 21:47:16 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:58.026 21:47:16 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:58.026 21:47:16 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:58.026 21:47:16 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:58.026 21:47:16 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:58.026 21:47:16 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:58.026 21:47:16 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:58.026 21:47:16 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:58.026 21:47:16 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:58.026 21:47:16 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:03:58.026 21:47:16 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:58.026 21:47:16 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:58.026 21:47:16 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:58.026 
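
Before the device tests begin, get_zoned_devs filters out zoned NVMe namespaces, since the mount tests that follow assume conventional block devices (nvme0n1 reads "none" here and passes). A self-contained sketch of that filter, assuming only the standard sysfs layout seen in the trace:

    shopt -s nullglob
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        # queue/zoned reports "none" for conventional devices.
        if [[ -e $nvme/queue/zoned && $(< "$nvme/queue/zoned") != none ]]; then
            zoned_devs[${nvme##*/}]=1   # exclude from the mount tests
        fi
    done
    echo "zoned devices: ${!zoned_devs[@]}"
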
21:47:16 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:58.026 No valid GPT data, bailing 00:03:58.026 21:47:16 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:58.026 21:47:16 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:58.026 21:47:16 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:58.026 21:47:16 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:58.026 21:47:16 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:58.026 21:47:16 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:58.026 21:47:16 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:58.026 21:47:16 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:58.026 21:47:16 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:58.026 21:47:16 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:03:58.026 21:47:16 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:58.026 21:47:16 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:58.026 21:47:16 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:58.026 21:47:16 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.026 21:47:16 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.026 21:47:16 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:58.026 ************************************ 00:03:58.026 START TEST nvme_mount 00:03:58.026 ************************************ 00:03:58.026 21:47:16 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:58.026 21:47:16 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:58.026 21:47:16 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:58.026 21:47:16 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:58.026 21:47:16 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:58.026 21:47:16 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:58.026 21:47:16 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:58.026 21:47:16 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:58.026 21:47:16 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:58.026 21:47:16 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:58.026 21:47:16 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:58.026 21:47:16 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:58.026 21:47:16 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:58.026 21:47:16 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:58.026 21:47:16 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:58.026 21:47:16 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:58.026 21:47:16 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:03:58.026 21:47:16 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:58.026 21:47:16 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:58.026 21:47:16 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:58.964 Creating new GPT entries in memory. 00:03:58.964 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:58.964 other utilities. 00:03:58.964 21:47:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:58.964 21:47:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:58.964 21:47:18 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:58.964 21:47:18 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:58.964 21:47:18 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:59.901 Creating new GPT entries in memory. 00:03:59.901 The operation has completed successfully. 00:03:59.901 21:47:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:59.901 21:47:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:59.901 21:47:19 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3919271 00:03:59.901 21:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.901 21:47:19 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:59.901 21:47:19 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.901 21:47:19 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:59.901 21:47:19 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:59.901 21:47:19 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.901 21:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:59.901 21:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:59.901 21:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:59.901 21:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:59.901 21:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:59.901 21:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:59.901 21:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:59.901 21:47:19 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:59.901 21:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:59.901 21:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.901 21:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:59.901 21:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:59.901 21:47:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.901 21:47:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:00.839 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.840 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:00.840 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.840 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:00.840 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.100 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:01.100 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:01.100 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.100 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:01.100 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:01.100 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:01.100 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.100 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.100 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:01.100 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:01.100 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:01.100 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:01.100 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:01.359 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:01.359 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:01.359 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:01.359 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:01.359 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:01.359 21:47:20 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:01.359 21:47:20 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.359 21:47:20 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:01.359 21:47:20 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:01.359 21:47:20 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.625 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:01.625 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:01.625 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:01.625 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:01.625 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:01.625 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:01.625 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:01.625 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:01.625 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:01.625 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.625 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:01.625 21:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:01.625 21:47:20 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.625 21:47:20 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:02.608 21:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:02.868 21:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:04:02.868 21:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:02.868 21:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:02.868 21:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:02.868 21:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:02.868 21:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:02.868 21:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:02.868 21:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:02.868 21:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.868 21:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:02.868 21:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:02.868 21:47:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.868 21:47:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 
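
Each burst of read -r pci _ _ status lines above is the verify helper scanning setup.sh config output for the one allowed BDF (0000:88:00.0) and checking that its status column names the expected active mount, flipping found=1 on a match. In sketch form; the column layout is inferred from the read pattern and should be treated as an assumption:

    allowed=0000:88:00.0
    mounts=nvme0n1:nvme0n1p1     # device:mount pair expected to be active
    found=0
    while read -r pci _ _ status; do
        [[ $pci == "$allowed" ]] || continue
        # e.g. "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev"
        [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
    done < <(PCI_ALLOWED=$allowed \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config)
    (( found == 1 )) && echo verified
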
00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:03.804 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.063 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:04.063 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:04.063 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:04.063 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:04.063 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.063 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:04.063 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:04.063 21:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:04.063 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:04.063 00:04:04.063 real 0m6.296s 00:04:04.063 user 0m1.453s 00:04:04.063 sys 0m2.398s 00:04:04.063 21:47:23 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.063 21:47:23 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:04:04.063 ************************************ 00:04:04.063 END TEST nvme_mount 00:04:04.063 ************************************ 00:04:04.063 21:47:23 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:04.063 21:47:23 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:04.063 21:47:23 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.063 21:47:23 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.063 21:47:23 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:04.063 ************************************ 00:04:04.063 START TEST dm_mount 00:04:04.063 ************************************ 00:04:04.063 21:47:23 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:04.063 21:47:23 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:04.063 21:47:23 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:04.063 21:47:23 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:04.063 21:47:23 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:04.063 21:47:23 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:04.063 21:47:23 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:04.063 21:47:23 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:04.063 21:47:23 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:04.063 21:47:23 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:04.063 21:47:23 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:04.063 21:47:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:04.063 21:47:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:04.063 21:47:23 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:04.063 21:47:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:04.063 21:47:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:04.063 21:47:23 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:04.063 21:47:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:04.063 21:47:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:04.063 21:47:23 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:04.063 21:47:23 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:04.063 21:47:23 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:04.999 Creating new GPT entries in memory. 00:04:04.999 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:04.999 other utilities. 00:04:04.999 21:47:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:04.999 21:47:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:04.999 21:47:24 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:04.999 21:47:24 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:04.999 21:47:24 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:06.376 Creating new GPT entries in memory. 00:04:06.376 The operation has completed successfully. 00:04:06.376 21:47:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:06.376 21:47:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:06.376 21:47:25 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:06.376 21:47:25 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:06.376 21:47:25 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:07.314 The operation has completed successfully. 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3921667 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.314 21:47:26 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:08.254 21:47:27 
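
The dm_mount test above carves two 1 GiB partitions, builds a device-mapper node named nvme_dm_test on top of them, and confirms the mapping through sysfs holders. The trace never prints the dm table itself, so the linear concatenation below is an assumption; the holders checks mirror the [[ -e /sys/class/block/.../holders/dm-0 ]] tests from the trace:

    p1=/dev/nvme0n1p1 p2=/dev/nvme0n1p2
    s1=$(blockdev --getsz "$p1")    # sizes in 512-byte sectors
    s2=$(blockdev --getsz "$p2")
    # Assumed table: concatenate both partitions into one linear device.
    dmsetup create nvme_dm_test <<EOF
    0 $s1 linear $p1 0
    $s1 $s2 linear $p2 0
    EOF
    dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")   # e.g. dm-0
    # Same holder checks as in the trace above.
    [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]] &&
    [[ -e /sys/class/block/nvme0n1p2/holders/$dm ]] &&
    echo "dm mapping verified"
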
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.254 21:47:27 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:09.634 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:09.634 00:04:09.634 real 0m5.523s 00:04:09.634 user 0m0.889s 00:04:09.634 sys 0m1.493s 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.634 21:47:28 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:09.634 ************************************ 00:04:09.634 END TEST dm_mount 00:04:09.634 ************************************ 00:04:09.634 21:47:28 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:04:09.634 21:47:28 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:09.634 21:47:28 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:09.634 21:47:28 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:09.634 21:47:28 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:09.634 21:47:28 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:09.634 21:47:28 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:09.634 21:47:28 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:09.894 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:09.894 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:09.894 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:09.894 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:09.894 21:47:29 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:09.894 21:47:29 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:09.894 21:47:29 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:09.894 21:47:29 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:09.894 21:47:29 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:09.894 21:47:29 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:09.894 21:47:29 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:09.894 00:04:09.894 real 0m13.717s 00:04:09.894 user 0m2.969s 00:04:09.894 sys 0m4.922s 00:04:09.894 21:47:29 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.894 21:47:29 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:09.894 ************************************ 00:04:09.894 END TEST devices 00:04:09.894 ************************************ 00:04:09.894 21:47:29 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:09.894 00:04:09.894 real 0m43.136s 00:04:09.894 user 0m12.287s 00:04:09.894 sys 0m18.992s 00:04:09.894 21:47:29 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.894 21:47:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:09.894 ************************************ 00:04:09.894 END TEST setup.sh 00:04:09.894 ************************************ 00:04:09.894 21:47:29 -- common/autotest_common.sh@1142 -- # return 0 00:04:09.894 21:47:29 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:11.275 Hugepages 00:04:11.275 node hugesize free / total 00:04:11.275 node0 1048576kB 0 / 0 00:04:11.275 node0 2048kB 2048 / 2048 00:04:11.275 node1 1048576kB 0 / 0 00:04:11.275 node1 2048kB 0 / 0 00:04:11.275 00:04:11.275 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:11.275 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:11.275 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:11.275 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:11.275 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:11.275 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:11.275 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:11.275 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:11.275 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:11.275 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:11.275 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:11.275 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:11.275 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:11.275 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:11.275 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:11.275 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:11.275 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:11.275 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:11.275 21:47:30 -- spdk/autotest.sh@130 -- # uname -s 00:04:11.275 21:47:30 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:11.275 21:47:30 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:11.275 21:47:30 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:12.654 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:12.654 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:12.654 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:12.654 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:12.654 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:12.654 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:12.654 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:12.654 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:12.654 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:12.654 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:12.654 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:12.654 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:12.654 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:12.654 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:12.654 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:12.654 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:13.594 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:13.594 21:47:32 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:14.531 21:47:33 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:14.531 21:47:33 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:14.531 21:47:33 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:14.531 21:47:33 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:14.531 21:47:33 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:14.531 21:47:33 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:14.531 21:47:33 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:14.531 21:47:33 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:14.531 21:47:33 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:14.531 21:47:33 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:14.531 21:47:33 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:14.531 21:47:33 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:15.466 Waiting for block devices as requested 00:04:15.726 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:15.726 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:15.985 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:15.985 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:15.985 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:15.985 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:16.244 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:16.244 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:16.244 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:04:16.244 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:16.504 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:16.504 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:16.504 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:16.505 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:16.764 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:16.764 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:16.764 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:17.023 21:47:36 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:17.023 21:47:36 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:17.023 21:47:36 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:17.023 21:47:36 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:04:17.023 21:47:36 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:17.023 21:47:36 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:17.023 21:47:36 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:17.023 21:47:36 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:17.023 21:47:36 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:17.023 21:47:36 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:17.023 21:47:36 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:17.023 21:47:36 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:17.023 21:47:36 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:17.023 21:47:36 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:04:17.023 21:47:36 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:17.023 21:47:36 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:17.023 21:47:36 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:17.023 21:47:36 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:17.023 21:47:36 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:17.023 21:47:36 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:17.023 21:47:36 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:17.023 21:47:36 -- common/autotest_common.sh@1557 -- # continue 00:04:17.023 21:47:36 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:17.023 21:47:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:17.023 21:47:36 -- common/autotest_common.sh@10 -- # set +x 00:04:17.023 21:47:36 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:17.024 21:47:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:17.024 21:47:36 -- common/autotest_common.sh@10 -- # set +x 00:04:17.024 21:47:36 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:17.994 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:17.994 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:17.994 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:17.994 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:17.994 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:17.994 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:17.994 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:17.994 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:17.994 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:17.994 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
00:04:18.253 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:18.253 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:18.253 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:18.253 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:18.253 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:18.253 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:19.194 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:19.194 21:47:38 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:19.194 21:47:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:19.194 21:47:38 -- common/autotest_common.sh@10 -- # set +x 00:04:19.194 21:47:38 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:19.194 21:47:38 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:19.194 21:47:38 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:19.194 21:47:38 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:19.194 21:47:38 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:19.194 21:47:38 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:19.194 21:47:38 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:19.194 21:47:38 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:19.194 21:47:38 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:19.194 21:47:38 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:19.194 21:47:38 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:19.452 21:47:38 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:19.452 21:47:38 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:19.452 21:47:38 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:19.452 21:47:38 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:19.452 21:47:38 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:19.452 21:47:38 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:19.452 21:47:38 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:19.452 21:47:38 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:04:19.452 21:47:38 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:04:19.452 21:47:38 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=3926834 00:04:19.452 21:47:38 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.452 21:47:38 -- common/autotest_common.sh@1598 -- # waitforlisten 3926834 00:04:19.452 21:47:38 -- common/autotest_common.sh@829 -- # '[' -z 3926834 ']' 00:04:19.452 21:47:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.452 21:47:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:19.452 21:47:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:19.452 21:47:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:19.452 21:47:38 -- common/autotest_common.sh@10 -- # set +x 00:04:19.452 [2024-07-13 21:47:38.697683] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
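[editor's note] The opal_revert_cleanup step above narrows the NVMe list to controllers whose PCI device ID is 0x0a54 by reading each device's sysfs `device` attribute. A minimal stand-alone sketch of that filter — the 0x0a54 ID and the sysfs read come from the trace itself, while walking all PCI devices and keying on the NVMe class code 0x010802 is our substitute for the gen_nvme.sh enumeration the harness actually uses:

    target=0x0a54
    for dev in /sys/bus/pci/devices/*; do
      [[ $(cat "$dev/class") == 0x010802 ]] || continue   # 0x010802 = mass storage / NVM Express
      [[ $(cat "$dev/device") == "$target" ]] && basename "$dev"
    done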
00:04:19.452 [2024-07-13 21:47:38.697834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3926834 ] 00:04:19.452 EAL: No free 2048 kB hugepages reported on node 1 00:04:19.452 [2024-07-13 21:47:38.823770] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.712 [2024-07-13 21:47:39.072701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.647 21:47:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:20.647 21:47:39 -- common/autotest_common.sh@862 -- # return 0 00:04:20.647 21:47:39 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:20.647 21:47:39 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:20.647 21:47:39 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:04:23.937 nvme0n1 00:04:23.937 21:47:43 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:23.937 [2024-07-13 21:47:43.303824] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:23.937 [2024-07-13 21:47:43.303917] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:23.937 request: 00:04:23.937 { 00:04:23.937 "nvme_ctrlr_name": "nvme0", 00:04:23.937 "password": "test", 00:04:23.937 "method": "bdev_nvme_opal_revert", 00:04:23.937 "req_id": 1 00:04:23.937 } 00:04:23.937 Got JSON-RPC error response 00:04:23.937 response: 00:04:23.937 { 00:04:23.937 "code": -32603, 00:04:23.937 "message": "Internal error" 00:04:23.937 } 00:04:23.937 21:47:43 -- common/autotest_common.sh@1604 -- # true 00:04:23.937 21:47:43 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:23.937 21:47:43 -- common/autotest_common.sh@1608 -- # killprocess 3926834 00:04:23.937 21:47:43 -- common/autotest_common.sh@948 -- # '[' -z 3926834 ']' 00:04:23.937 21:47:43 -- common/autotest_common.sh@952 -- # kill -0 3926834 00:04:23.937 21:47:43 -- common/autotest_common.sh@953 -- # uname 00:04:23.937 21:47:43 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:23.937 21:47:43 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3926834 00:04:24.196 21:47:43 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:24.196 21:47:43 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:24.196 21:47:43 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3926834' 00:04:24.196 killing process with pid 3926834 00:04:24.196 21:47:43 -- common/autotest_common.sh@967 -- # kill 3926834 00:04:24.196 21:47:43 -- common/autotest_common.sh@972 -- # wait 3926834 00:04:28.388 21:47:47 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:28.388 21:47:47 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:28.388 21:47:47 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:28.388 21:47:47 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:28.388 21:47:47 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:28.388 21:47:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:28.388 21:47:47 -- common/autotest_common.sh@10 -- # set +x 00:04:28.388 21:47:47 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:28.389 21:47:47 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:28.389 21:47:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.389 21:47:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.389 21:47:47 -- common/autotest_common.sh@10 -- # set +x 00:04:28.389 ************************************ 00:04:28.389 START TEST env 00:04:28.389 ************************************ 00:04:28.389 21:47:47 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:28.389 * Looking for test storage... 00:04:28.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:28.389 21:47:47 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:28.389 21:47:47 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.389 21:47:47 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.389 21:47:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.389 ************************************ 00:04:28.389 START TEST env_memory 00:04:28.389 ************************************ 00:04:28.389 21:47:47 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:28.389 00:04:28.389 00:04:28.389 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.389 http://cunit.sourceforge.net/ 00:04:28.389 00:04:28.389 00:04:28.389 Suite: memory 00:04:28.389 Test: alloc and free memory map ...[2024-07-13 21:47:47.218145] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:28.389 passed 00:04:28.389 Test: mem map translation ...[2024-07-13 21:47:47.258803] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:28.389 [2024-07-13 21:47:47.258843] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:28.389 [2024-07-13 21:47:47.258925] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:28.389 [2024-07-13 21:47:47.258955] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:28.389 passed 00:04:28.389 Test: mem map registration ...[2024-07-13 21:47:47.322907] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:28.389 [2024-07-13 21:47:47.322942] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:28.389 passed 00:04:28.389 Test: mem map adjacent registrations ...passed 00:04:28.389 00:04:28.389 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.389 suites 1 1 n/a 0 0 00:04:28.389 tests 4 4 4 0 0 00:04:28.389 asserts 152 152 152 0 n/a 00:04:28.389 00:04:28.389 Elapsed time = 0.228 seconds 00:04:28.389 00:04:28.389 real 0m0.249s 00:04:28.389 user 0m0.233s 00:04:28.389 sys 0m0.015s 00:04:28.389 21:47:47 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.389 21:47:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:28.389 ************************************ 00:04:28.389 END TEST env_memory 00:04:28.389 ************************************ 00:04:28.389 21:47:47 env -- common/autotest_common.sh@1142 -- # return 0 00:04:28.389 21:47:47 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:28.389 21:47:47 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.389 21:47:47 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.389 21:47:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.389 ************************************ 00:04:28.389 START TEST env_vtophys 00:04:28.389 ************************************ 00:04:28.389 21:47:47 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:28.389 EAL: lib.eal log level changed from notice to debug 00:04:28.389 EAL: Detected lcore 0 as core 0 on socket 0 00:04:28.389 EAL: Detected lcore 1 as core 1 on socket 0 00:04:28.389 EAL: Detected lcore 2 as core 2 on socket 0 00:04:28.389 EAL: Detected lcore 3 as core 3 on socket 0 00:04:28.389 EAL: Detected lcore 4 as core 4 on socket 0 00:04:28.389 EAL: Detected lcore 5 as core 5 on socket 0 00:04:28.389 EAL: Detected lcore 6 as core 8 on socket 0 00:04:28.389 EAL: Detected lcore 7 as core 9 on socket 0 00:04:28.389 EAL: Detected lcore 8 as core 10 on socket 0 00:04:28.389 EAL: Detected lcore 9 as core 11 on socket 0 00:04:28.389 EAL: Detected lcore 10 as core 12 on socket 0 00:04:28.389 EAL: Detected lcore 11 as core 13 on socket 0 00:04:28.389 EAL: Detected lcore 12 as core 0 on socket 1 00:04:28.389 EAL: Detected lcore 13 as core 1 on socket 1 00:04:28.389 EAL: Detected lcore 14 as core 2 on socket 1 00:04:28.389 EAL: Detected lcore 15 as core 3 on socket 1 00:04:28.389 EAL: Detected lcore 16 as core 4 on socket 1 00:04:28.389 EAL: Detected lcore 17 as core 5 on socket 1 00:04:28.389 EAL: Detected lcore 18 as core 8 on socket 1 00:04:28.389 EAL: Detected lcore 19 as core 9 on socket 1 00:04:28.389 EAL: Detected lcore 20 as core 10 on socket 1 00:04:28.389 EAL: Detected lcore 21 as core 11 on socket 1 00:04:28.389 EAL: Detected lcore 22 as core 12 on socket 1 00:04:28.389 EAL: Detected lcore 23 as core 13 on socket 1 00:04:28.389 EAL: Detected lcore 24 as core 0 on socket 0 00:04:28.389 EAL: Detected lcore 25 as core 1 on socket 0 00:04:28.389 EAL: Detected lcore 26 as core 2 on socket 0 00:04:28.389 EAL: Detected lcore 27 as core 3 on socket 0 00:04:28.389 EAL: Detected lcore 28 as core 4 on socket 0 00:04:28.389 EAL: Detected lcore 29 as core 5 on socket 0 00:04:28.389 EAL: Detected lcore 30 as core 8 on socket 0 00:04:28.389 EAL: Detected lcore 31 as core 9 on socket 0 00:04:28.389 EAL: Detected lcore 32 as core 10 on socket 0 00:04:28.389 EAL: Detected lcore 33 as core 11 on socket 0 00:04:28.389 EAL: Detected lcore 34 as core 12 on socket 0 00:04:28.389 EAL: Detected lcore 35 as core 13 on socket 0 00:04:28.389 EAL: Detected lcore 36 as core 0 on socket 1 00:04:28.389 EAL: Detected lcore 37 as core 1 on socket 1 00:04:28.389 EAL: Detected lcore 38 as core 2 on socket 1 00:04:28.389 EAL: Detected lcore 39 as core 3 on socket 1 00:04:28.389 EAL: Detected lcore 40 as core 4 on socket 1 00:04:28.389 EAL: Detected lcore 41 as core 5 on socket 1 00:04:28.389 EAL: Detected 
lcore 42 as core 8 on socket 1 00:04:28.389 EAL: Detected lcore 43 as core 9 on socket 1 00:04:28.389 EAL: Detected lcore 44 as core 10 on socket 1 00:04:28.389 EAL: Detected lcore 45 as core 11 on socket 1 00:04:28.389 EAL: Detected lcore 46 as core 12 on socket 1 00:04:28.389 EAL: Detected lcore 47 as core 13 on socket 1 00:04:28.389 EAL: Maximum logical cores by configuration: 128 00:04:28.389 EAL: Detected CPU lcores: 48 00:04:28.389 EAL: Detected NUMA nodes: 2 00:04:28.389 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:28.389 EAL: Detected shared linkage of DPDK 00:04:28.389 EAL: No shared files mode enabled, IPC will be disabled 00:04:28.389 EAL: Bus pci wants IOVA as 'DC' 00:04:28.389 EAL: Buses did not request a specific IOVA mode. 00:04:28.389 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:28.389 EAL: Selected IOVA mode 'VA' 00:04:28.389 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.389 EAL: Probing VFIO support... 00:04:28.389 EAL: IOMMU type 1 (Type 1) is supported 00:04:28.389 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:28.389 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:28.389 EAL: VFIO support initialized 00:04:28.389 EAL: Ask a virtual area of 0x2e000 bytes 00:04:28.389 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:28.389 EAL: Setting up physically contiguous memory... 00:04:28.389 EAL: Setting maximum number of open files to 524288 00:04:28.389 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:28.389 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:28.389 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:28.389 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.389 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:28.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.389 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.389 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:28.389 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:28.389 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.389 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:28.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.389 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.389 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:28.389 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:28.389 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.389 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:28.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.389 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.389 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:28.389 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:28.389 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.389 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:28.389 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.389 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.389 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:28.389 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:28.389 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:28.389 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.389 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:28.389 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:04:28.389 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.389 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:28.389 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:28.389 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.389 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:28.389 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:28.389 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.389 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:28.389 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:28.389 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.389 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:28.389 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:28.389 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.389 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:28.389 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:28.389 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.390 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:28.390 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:28.390 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.390 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:28.390 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:28.390 EAL: Hugepages will be freed exactly as allocated. 00:04:28.390 EAL: No shared files mode enabled, IPC is disabled 00:04:28.390 EAL: No shared files mode enabled, IPC is disabled 00:04:28.390 EAL: TSC frequency is ~2700000 KHz 00:04:28.390 EAL: Main lcore 0 is ready (tid=7fc10f50ea40;cpuset=[0]) 00:04:28.390 EAL: Trying to obtain current memory policy. 00:04:28.390 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.390 EAL: Restoring previous memory policy: 0 00:04:28.390 EAL: request: mp_malloc_sync 00:04:28.390 EAL: No shared files mode enabled, IPC is disabled 00:04:28.390 EAL: Heap on socket 0 was expanded by 2MB 00:04:28.390 EAL: No shared files mode enabled, IPC is disabled 00:04:28.390 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:28.390 EAL: Mem event callback 'spdk:(nil)' registered 00:04:28.390 00:04:28.390 00:04:28.390 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.390 http://cunit.sourceforge.net/ 00:04:28.390 00:04:28.390 00:04:28.390 Suite: components_suite 00:04:28.649 Test: vtophys_malloc_test ...passed 00:04:28.649 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:28.649 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.649 EAL: Restoring previous memory policy: 4 00:04:28.649 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.649 EAL: request: mp_malloc_sync 00:04:28.649 EAL: No shared files mode enabled, IPC is disabled 00:04:28.649 EAL: Heap on socket 0 was expanded by 4MB 00:04:28.909 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.909 EAL: request: mp_malloc_sync 00:04:28.909 EAL: No shared files mode enabled, IPC is disabled 00:04:28.909 EAL: Heap on socket 0 was shrunk by 4MB 00:04:28.909 EAL: Trying to obtain current memory policy. 
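[editor's note] All of the memseg carving above rides on 2 MB hugepages, and this run only has them on node 0 ("No free 2048 kB hugepages reported on node 1"). A quick per-node availability check through the standard kernel sysfs paths — these paths are stock Linux, not part of the trace:

    for node in /sys/devices/system/node/node*; do
      hp=$node/hugepages/hugepages-2048kB
      [[ -d $hp ]] || continue
      echo "$(basename "$node"): $(cat "$hp"/free_hugepages)/$(cat "$hp"/nr_hugepages) free/total 2048kB pages"
    done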
00:04:28.909 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.909 EAL: Restoring previous memory policy: 4 00:04:28.909 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.909 EAL: request: mp_malloc_sync 00:04:28.909 EAL: No shared files mode enabled, IPC is disabled 00:04:28.909 EAL: Heap on socket 0 was expanded by 6MB 00:04:28.909 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.909 EAL: request: mp_malloc_sync 00:04:28.909 EAL: No shared files mode enabled, IPC is disabled 00:04:28.909 EAL: Heap on socket 0 was shrunk by 6MB 00:04:28.909 EAL: Trying to obtain current memory policy. 00:04:28.909 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.909 EAL: Restoring previous memory policy: 4 00:04:28.909 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.909 EAL: request: mp_malloc_sync 00:04:28.909 EAL: No shared files mode enabled, IPC is disabled 00:04:28.909 EAL: Heap on socket 0 was expanded by 10MB 00:04:28.909 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.909 EAL: request: mp_malloc_sync 00:04:28.909 EAL: No shared files mode enabled, IPC is disabled 00:04:28.909 EAL: Heap on socket 0 was shrunk by 10MB 00:04:28.909 EAL: Trying to obtain current memory policy. 00:04:28.909 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.909 EAL: Restoring previous memory policy: 4 00:04:28.909 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.909 EAL: request: mp_malloc_sync 00:04:28.909 EAL: No shared files mode enabled, IPC is disabled 00:04:28.909 EAL: Heap on socket 0 was expanded by 18MB 00:04:28.909 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.909 EAL: request: mp_malloc_sync 00:04:28.909 EAL: No shared files mode enabled, IPC is disabled 00:04:28.909 EAL: Heap on socket 0 was shrunk by 18MB 00:04:28.909 EAL: Trying to obtain current memory policy. 00:04:28.909 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.909 EAL: Restoring previous memory policy: 4 00:04:28.909 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.909 EAL: request: mp_malloc_sync 00:04:28.909 EAL: No shared files mode enabled, IPC is disabled 00:04:28.909 EAL: Heap on socket 0 was expanded by 34MB 00:04:28.909 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.909 EAL: request: mp_malloc_sync 00:04:28.909 EAL: No shared files mode enabled, IPC is disabled 00:04:28.909 EAL: Heap on socket 0 was shrunk by 34MB 00:04:28.909 EAL: Trying to obtain current memory policy. 00:04:28.909 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.909 EAL: Restoring previous memory policy: 4 00:04:28.909 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.909 EAL: request: mp_malloc_sync 00:04:28.909 EAL: No shared files mode enabled, IPC is disabled 00:04:28.909 EAL: Heap on socket 0 was expanded by 66MB 00:04:29.169 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.169 EAL: request: mp_malloc_sync 00:04:29.169 EAL: No shared files mode enabled, IPC is disabled 00:04:29.169 EAL: Heap on socket 0 was shrunk by 66MB 00:04:29.169 EAL: Trying to obtain current memory policy. 
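[editor's note] Each iteration above temporarily selects MPOL_PREFERRED and then restores policy 4 — MPOL_LOCAL in the kernel's numbering — which EAL does via the set_mempolicy(2) syscall. From a shell the nearest analogue is numactl; a hedged example of running the same test binary with allocations preferred on socket 0 (the numactl invocation is ours, not the harness's):

    # prefer node 0 for allocations, falling back to other nodes when exhausted
    numactl --preferred=0 -- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys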
00:04:29.169 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.169 EAL: Restoring previous memory policy: 4 00:04:29.169 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.169 EAL: request: mp_malloc_sync 00:04:29.169 EAL: No shared files mode enabled, IPC is disabled 00:04:29.169 EAL: Heap on socket 0 was expanded by 130MB 00:04:29.429 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.687 EAL: request: mp_malloc_sync 00:04:29.687 EAL: No shared files mode enabled, IPC is disabled 00:04:29.687 EAL: Heap on socket 0 was shrunk by 130MB 00:04:29.687 EAL: Trying to obtain current memory policy. 00:04:29.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.947 EAL: Restoring previous memory policy: 4 00:04:29.947 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.947 EAL: request: mp_malloc_sync 00:04:29.947 EAL: No shared files mode enabled, IPC is disabled 00:04:29.947 EAL: Heap on socket 0 was expanded by 258MB 00:04:30.205 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.463 EAL: request: mp_malloc_sync 00:04:30.463 EAL: No shared files mode enabled, IPC is disabled 00:04:30.463 EAL: Heap on socket 0 was shrunk by 258MB 00:04:30.723 EAL: Trying to obtain current memory policy. 00:04:30.723 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.982 EAL: Restoring previous memory policy: 4 00:04:30.982 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.982 EAL: request: mp_malloc_sync 00:04:30.982 EAL: No shared files mode enabled, IPC is disabled 00:04:30.982 EAL: Heap on socket 0 was expanded by 514MB 00:04:31.918 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.918 EAL: request: mp_malloc_sync 00:04:31.918 EAL: No shared files mode enabled, IPC is disabled 00:04:31.918 EAL: Heap on socket 0 was shrunk by 514MB 00:04:32.857 EAL: Trying to obtain current memory policy. 
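[editor's note] The expand/shrink sizes are not arbitrary: every allocation so far lands on 2^k + 2 MB (4, 6, 10, 18, 34, 66, 130, 258, 514 MB, with 1026 MB next), i.e. a doubling series offset by the initial 2 MB heap. That reading is an inference from the logged sizes, not something the test states; reproducing the series is one line of arithmetic:

    for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
    # 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB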
00:04:32.857 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.115 EAL: Restoring previous memory policy: 4 00:04:33.115 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.115 EAL: request: mp_malloc_sync 00:04:33.115 EAL: No shared files mode enabled, IPC is disabled 00:04:33.115 EAL: Heap on socket 0 was expanded by 1026MB 00:04:35.033 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.326 EAL: request: mp_malloc_sync 00:04:35.326 EAL: No shared files mode enabled, IPC is disabled 00:04:35.326 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:36.706 passed 00:04:36.706 00:04:36.706 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.706 suites 1 1 n/a 0 0 00:04:36.706 tests 2 2 2 0 0 00:04:36.706 asserts 497 497 497 0 n/a 00:04:36.706 00:04:36.706 Elapsed time = 8.320 seconds 00:04:36.706 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.706 EAL: request: mp_malloc_sync 00:04:36.706 EAL: No shared files mode enabled, IPC is disabled 00:04:36.706 EAL: Heap on socket 0 was shrunk by 2MB 00:04:36.706 EAL: No shared files mode enabled, IPC is disabled 00:04:36.706 EAL: No shared files mode enabled, IPC is disabled 00:04:36.706 EAL: No shared files mode enabled, IPC is disabled 00:04:36.706 00:04:36.706 real 0m8.590s 00:04:36.706 user 0m7.461s 00:04:36.706 sys 0m1.072s 00:04:36.706 21:47:56 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.706 21:47:56 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:36.706 ************************************ 00:04:36.706 END TEST env_vtophys 00:04:36.706 ************************************ 00:04:36.706 21:47:56 env -- common/autotest_common.sh@1142 -- # return 0 00:04:36.706 21:47:56 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:36.706 21:47:56 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.706 21:47:56 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.706 21:47:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.965 ************************************ 00:04:36.965 START TEST env_pci 00:04:36.965 ************************************ 00:04:36.965 21:47:56 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:36.965 00:04:36.965 00:04:36.965 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.965 http://cunit.sourceforge.net/ 00:04:36.965 00:04:36.965 00:04:36.965 Suite: pci 00:04:36.965 Test: pci_hook ...[2024-07-13 21:47:56.128948] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3928925 has claimed it 00:04:36.965 EAL: Cannot find device (10000:00:01.0) 00:04:36.965 EAL: Failed to attach device on primary process 00:04:36.965 passed 00:04:36.965 00:04:36.965 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.965 suites 1 1 n/a 0 0 00:04:36.965 tests 1 1 1 0 0 00:04:36.965 asserts 25 25 25 0 n/a 00:04:36.965 00:04:36.965 Elapsed time = 0.041 seconds 00:04:36.965 00:04:36.965 real 0m0.089s 00:04:36.965 user 0m0.037s 00:04:36.965 sys 0m0.052s 00:04:36.965 21:47:56 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.965 21:47:56 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:36.965 ************************************ 00:04:36.965 END TEST env_pci 00:04:36.965 ************************************ 
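[editor's note] The pci_hook test passes precisely because spdk_pci_device_claim finds the fake device 10000:00:01.0 already claimed; per the error text above, the claim is a lock file under /var/tmp. Listing stale claims between runs is harmless, and clearing them is safe only once every SPDK process has exited:

    ls -l /var/tmp/spdk_pci_lock_* 2>/dev/null
    # rm -f /var/tmp/spdk_pci_lock_*   # only with all SPDK processes stopped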
00:04:36.965 21:47:56 env -- common/autotest_common.sh@1142 -- # return 0 00:04:36.965 21:47:56 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:36.965 21:47:56 env -- env/env.sh@15 -- # uname 00:04:36.965 21:47:56 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:36.965 21:47:56 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:36.965 21:47:56 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:36.965 21:47:56 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:36.965 21:47:56 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.965 21:47:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.965 ************************************ 00:04:36.965 START TEST env_dpdk_post_init 00:04:36.965 ************************************ 00:04:36.965 21:47:56 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:36.965 EAL: Detected CPU lcores: 48 00:04:36.965 EAL: Detected NUMA nodes: 2 00:04:36.965 EAL: Detected shared linkage of DPDK 00:04:36.965 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:36.965 EAL: Selected IOVA mode 'VA' 00:04:36.965 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.225 EAL: VFIO support initialized 00:04:37.225 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:37.225 EAL: Using IOMMU type 1 (Type 1) 00:04:37.225 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:37.225 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:37.225 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:37.225 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:37.225 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:37.225 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:37.225 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:37.225 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:37.225 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:37.225 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:37.225 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:37.486 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:37.486 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:37.486 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:37.486 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:37.486 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:38.056 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:04:41.342 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:41.342 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:41.602 Starting DPDK initialization... 00:04:41.602 Starting SPDK post initialization... 00:04:41.602 SPDK NVMe probe 00:04:41.602 Attaching to 0000:88:00.0 00:04:41.602 Attached to 0000:88:00.0 00:04:41.602 Cleaning up... 
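[editor's note] After env_dpdk_post_init attaches, probes, and cleans up, the devices remain bound to vfio-pci until setup.sh rebinds them. Two quick ways to confirm what a given BDF is bound to at any point in the run — both the sysfs layout and the setup.sh status subcommand appear elsewhere in this log:

    basename "$(readlink -f /sys/bus/pci/devices/0000:88:00.0/driver)"
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status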
00:04:41.602 00:04:41.602 real 0m4.555s 00:04:41.602 user 0m3.363s 00:04:41.602 sys 0m0.250s 00:04:41.602 21:48:00 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.602 21:48:00 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.602 ************************************ 00:04:41.602 END TEST env_dpdk_post_init 00:04:41.602 ************************************ 00:04:41.602 21:48:00 env -- common/autotest_common.sh@1142 -- # return 0 00:04:41.602 21:48:00 env -- env/env.sh@26 -- # uname 00:04:41.602 21:48:00 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:41.602 21:48:00 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:41.602 21:48:00 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.602 21:48:00 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.602 21:48:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.602 ************************************ 00:04:41.602 START TEST env_mem_callbacks 00:04:41.602 ************************************ 00:04:41.602 21:48:00 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:41.602 EAL: Detected CPU lcores: 48 00:04:41.602 EAL: Detected NUMA nodes: 2 00:04:41.602 EAL: Detected shared linkage of DPDK 00:04:41.602 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:41.602 EAL: Selected IOVA mode 'VA' 00:04:41.602 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.602 EAL: VFIO support initialized 00:04:41.602 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:41.602 00:04:41.602 00:04:41.602 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.602 http://cunit.sourceforge.net/ 00:04:41.602 00:04:41.602 00:04:41.602 Suite: memory 00:04:41.602 Test: test ... 
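[editor's note] The mem_callbacks binary reuses the DPDK runtime directory (the multi-process socket at /var/run/dpdk/rte/mp_socket noted above); when an earlier test dies uncleanly, leftovers there are a common first suspect. Inspecting the directory is harmless — anything beyond listing is situational:

    ls -l /var/run/dpdk/rte/ 2>/dev/null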
00:04:41.602 register 0x200000200000 2097152 00:04:41.602 malloc 3145728 00:04:41.602 register 0x200000400000 4194304 00:04:41.602 buf 0x2000004fffc0 len 3145728 PASSED 00:04:41.602 malloc 64 00:04:41.602 buf 0x2000004ffec0 len 64 PASSED 00:04:41.602 malloc 4194304 00:04:41.602 register 0x200000800000 6291456 00:04:41.602 buf 0x2000009fffc0 len 4194304 PASSED 00:04:41.602 free 0x2000004fffc0 3145728 00:04:41.602 free 0x2000004ffec0 64 00:04:41.602 unregister 0x200000400000 4194304 PASSED 00:04:41.602 free 0x2000009fffc0 4194304 00:04:41.602 unregister 0x200000800000 6291456 PASSED 00:04:41.602 malloc 8388608 00:04:41.602 register 0x200000400000 10485760 00:04:41.602 buf 0x2000005fffc0 len 8388608 PASSED 00:04:41.602 free 0x2000005fffc0 8388608 00:04:41.861 unregister 0x200000400000 10485760 PASSED 00:04:41.861 passed 00:04:41.861 00:04:41.861 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.861 suites 1 1 n/a 0 0 00:04:41.861 tests 1 1 1 0 0 00:04:41.861 asserts 15 15 15 0 n/a 00:04:41.861 00:04:41.861 Elapsed time = 0.060 seconds 00:04:41.861 00:04:41.861 real 0m0.183s 00:04:41.861 user 0m0.098s 00:04:41.861 sys 0m0.084s 00:04:41.861 21:48:01 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.861 21:48:01 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:41.861 ************************************ 00:04:41.861 END TEST env_mem_callbacks 00:04:41.861 ************************************ 00:04:41.861 21:48:01 env -- common/autotest_common.sh@1142 -- # return 0 00:04:41.861 00:04:41.861 real 0m13.961s 00:04:41.861 user 0m11.310s 00:04:41.861 sys 0m1.669s 00:04:41.861 21:48:01 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.861 21:48:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.861 ************************************ 00:04:41.861 END TEST env 00:04:41.861 ************************************ 00:04:41.861 21:48:01 -- common/autotest_common.sh@1142 -- # return 0 00:04:41.861 21:48:01 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:41.861 21:48:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.861 21:48:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.861 21:48:01 -- common/autotest_common.sh@10 -- # set +x 00:04:41.861 ************************************ 00:04:41.861 START TEST rpc 00:04:41.861 ************************************ 00:04:41.861 21:48:01 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:41.861 * Looking for test storage... 00:04:41.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:41.861 21:48:01 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3929780 00:04:41.861 21:48:01 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:41.861 21:48:01 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.861 21:48:01 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3929780 00:04:41.861 21:48:01 rpc -- common/autotest_common.sh@829 -- # '[' -z 3929780 ']' 00:04:41.861 21:48:01 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.861 21:48:01 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.861 21:48:01 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:41.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.861 21:48:01 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.861 21:48:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.861 [2024-07-13 21:48:01.230921] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:41.861 [2024-07-13 21:48:01.231082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929780 ] 00:04:42.120 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.120 [2024-07-13 21:48:01.356739] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.379 [2024-07-13 21:48:01.609541] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:42.379 [2024-07-13 21:48:01.609619] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3929780' to capture a snapshot of events at runtime. 00:04:42.379 [2024-07-13 21:48:01.609645] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:42.379 [2024-07-13 21:48:01.609673] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:42.379 [2024-07-13 21:48:01.609692] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3929780 for offline analysis/debug. 00:04:42.379 [2024-07-13 21:48:01.609745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.319 21:48:02 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.319 21:48:02 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:43.319 21:48:02 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:43.319 21:48:02 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:43.319 21:48:02 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:43.319 21:48:02 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:43.319 21:48:02 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.319 21:48:02 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.319 21:48:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.319 ************************************ 00:04:43.319 START TEST rpc_integrity 00:04:43.319 ************************************ 00:04:43.319 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:43.319 21:48:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:43.319 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.319 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.319 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.319 21:48:02 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:04:43.319 21:48:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:43.319 21:48:02 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:43.319 21:48:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:43.319 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.319 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.319 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.319 21:48:02 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:43.319 21:48:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:43.319 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.319 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.319 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.319 21:48:02 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:43.319 { 00:04:43.319 "name": "Malloc0", 00:04:43.319 "aliases": [ 00:04:43.319 "5048b416-4a34-4768-8519-ac0886827cdf" 00:04:43.319 ], 00:04:43.319 "product_name": "Malloc disk", 00:04:43.319 "block_size": 512, 00:04:43.319 "num_blocks": 16384, 00:04:43.319 "uuid": "5048b416-4a34-4768-8519-ac0886827cdf", 00:04:43.319 "assigned_rate_limits": { 00:04:43.319 "rw_ios_per_sec": 0, 00:04:43.319 "rw_mbytes_per_sec": 0, 00:04:43.319 "r_mbytes_per_sec": 0, 00:04:43.319 "w_mbytes_per_sec": 0 00:04:43.319 }, 00:04:43.319 "claimed": false, 00:04:43.319 "zoned": false, 00:04:43.319 "supported_io_types": { 00:04:43.319 "read": true, 00:04:43.319 "write": true, 00:04:43.319 "unmap": true, 00:04:43.319 "flush": true, 00:04:43.319 "reset": true, 00:04:43.319 "nvme_admin": false, 00:04:43.319 "nvme_io": false, 00:04:43.319 "nvme_io_md": false, 00:04:43.319 "write_zeroes": true, 00:04:43.319 "zcopy": true, 00:04:43.319 "get_zone_info": false, 00:04:43.319 "zone_management": false, 00:04:43.319 "zone_append": false, 00:04:43.319 "compare": false, 00:04:43.319 "compare_and_write": false, 00:04:43.319 "abort": true, 00:04:43.319 "seek_hole": false, 00:04:43.319 "seek_data": false, 00:04:43.319 "copy": true, 00:04:43.319 "nvme_iov_md": false 00:04:43.319 }, 00:04:43.319 "memory_domains": [ 00:04:43.319 { 00:04:43.319 "dma_device_id": "system", 00:04:43.319 "dma_device_type": 1 00:04:43.319 }, 00:04:43.319 { 00:04:43.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.319 "dma_device_type": 2 00:04:43.319 } 00:04:43.319 ], 00:04:43.319 "driver_specific": {} 00:04:43.319 } 00:04:43.319 ]' 00:04:43.320 21:48:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:43.320 21:48:02 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:43.320 21:48:02 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:43.320 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.320 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.320 [2024-07-13 21:48:02.624419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:43.320 [2024-07-13 21:48:02.624497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:43.320 [2024-07-13 21:48:02.624543] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880 00:04:43.320 [2024-07-13 21:48:02.624572] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
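[editor's note] Stripped of its assertion scaffolding, the rpc_integrity test above is a create/verify/delete cycle driven entirely over JSON-RPC. A condensed sketch of the same flow — each command is taken verbatim from the trace, and rpc.py talks to /var/tmp/spdk.sock by default:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    malloc=$($rpc bdev_malloc_create 8 512)          # 8 MiB at 512-byte blocks; prints the bdev name
    $rpc bdev_passthru_create -b "$malloc" -p Passthru0
    [ "$($rpc bdev_get_bdevs | jq length)" -eq 2 ]   # malloc bdev + passthru both visible
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete "$malloc"
    [ "$($rpc bdev_get_bdevs | jq length)" -eq 0 ]   # everything torn back down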
00:04:43.320 [2024-07-13 21:48:02.627287] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:43.320 [2024-07-13 21:48:02.627335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:43.320 Passthru0 00:04:43.320 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.320 21:48:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:43.320 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.320 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.320 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.320 21:48:02 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:43.320 { 00:04:43.320 "name": "Malloc0", 00:04:43.320 "aliases": [ 00:04:43.320 "5048b416-4a34-4768-8519-ac0886827cdf" 00:04:43.320 ], 00:04:43.320 "product_name": "Malloc disk", 00:04:43.320 "block_size": 512, 00:04:43.320 "num_blocks": 16384, 00:04:43.320 "uuid": "5048b416-4a34-4768-8519-ac0886827cdf", 00:04:43.320 "assigned_rate_limits": { 00:04:43.320 "rw_ios_per_sec": 0, 00:04:43.320 "rw_mbytes_per_sec": 0, 00:04:43.320 "r_mbytes_per_sec": 0, 00:04:43.320 "w_mbytes_per_sec": 0 00:04:43.320 }, 00:04:43.320 "claimed": true, 00:04:43.320 "claim_type": "exclusive_write", 00:04:43.320 "zoned": false, 00:04:43.320 "supported_io_types": { 00:04:43.320 "read": true, 00:04:43.320 "write": true, 00:04:43.320 "unmap": true, 00:04:43.320 "flush": true, 00:04:43.320 "reset": true, 00:04:43.320 "nvme_admin": false, 00:04:43.320 "nvme_io": false, 00:04:43.320 "nvme_io_md": false, 00:04:43.320 "write_zeroes": true, 00:04:43.320 "zcopy": true, 00:04:43.320 "get_zone_info": false, 00:04:43.320 "zone_management": false, 00:04:43.320 "zone_append": false, 00:04:43.320 "compare": false, 00:04:43.320 "compare_and_write": false, 00:04:43.320 "abort": true, 00:04:43.320 "seek_hole": false, 00:04:43.320 "seek_data": false, 00:04:43.320 "copy": true, 00:04:43.320 "nvme_iov_md": false 00:04:43.320 }, 00:04:43.320 "memory_domains": [ 00:04:43.320 { 00:04:43.320 "dma_device_id": "system", 00:04:43.320 "dma_device_type": 1 00:04:43.320 }, 00:04:43.320 { 00:04:43.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.320 "dma_device_type": 2 00:04:43.320 } 00:04:43.320 ], 00:04:43.320 "driver_specific": {} 00:04:43.320 }, 00:04:43.320 { 00:04:43.320 "name": "Passthru0", 00:04:43.320 "aliases": [ 00:04:43.320 "f36cbd52-8bdb-54cd-a695-15377b64813e" 00:04:43.320 ], 00:04:43.320 "product_name": "passthru", 00:04:43.320 "block_size": 512, 00:04:43.320 "num_blocks": 16384, 00:04:43.320 "uuid": "f36cbd52-8bdb-54cd-a695-15377b64813e", 00:04:43.320 "assigned_rate_limits": { 00:04:43.320 "rw_ios_per_sec": 0, 00:04:43.320 "rw_mbytes_per_sec": 0, 00:04:43.320 "r_mbytes_per_sec": 0, 00:04:43.320 "w_mbytes_per_sec": 0 00:04:43.320 }, 00:04:43.320 "claimed": false, 00:04:43.320 "zoned": false, 00:04:43.320 "supported_io_types": { 00:04:43.320 "read": true, 00:04:43.320 "write": true, 00:04:43.320 "unmap": true, 00:04:43.320 "flush": true, 00:04:43.320 "reset": true, 00:04:43.320 "nvme_admin": false, 00:04:43.320 "nvme_io": false, 00:04:43.320 "nvme_io_md": false, 00:04:43.320 "write_zeroes": true, 00:04:43.320 "zcopy": true, 00:04:43.320 "get_zone_info": false, 00:04:43.320 "zone_management": false, 00:04:43.320 "zone_append": false, 00:04:43.320 "compare": false, 00:04:43.320 "compare_and_write": false, 00:04:43.320 "abort": true, 00:04:43.320 
"seek_hole": false, 00:04:43.320 "seek_data": false, 00:04:43.320 "copy": true, 00:04:43.320 "nvme_iov_md": false 00:04:43.320 }, 00:04:43.320 "memory_domains": [ 00:04:43.320 { 00:04:43.320 "dma_device_id": "system", 00:04:43.320 "dma_device_type": 1 00:04:43.320 }, 00:04:43.320 { 00:04:43.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.320 "dma_device_type": 2 00:04:43.320 } 00:04:43.320 ], 00:04:43.320 "driver_specific": { 00:04:43.320 "passthru": { 00:04:43.320 "name": "Passthru0", 00:04:43.320 "base_bdev_name": "Malloc0" 00:04:43.320 } 00:04:43.320 } 00:04:43.320 } 00:04:43.320 ]' 00:04:43.320 21:48:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:43.320 21:48:02 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:43.320 21:48:02 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:43.320 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.320 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.320 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.320 21:48:02 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:43.320 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.320 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.580 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.580 21:48:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:43.580 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.580 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.580 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.580 21:48:02 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:43.581 21:48:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:43.581 21:48:02 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:43.581 00:04:43.581 real 0m0.257s 00:04:43.581 user 0m0.145s 00:04:43.581 sys 0m0.024s 00:04:43.581 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.581 21:48:02 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.581 ************************************ 00:04:43.581 END TEST rpc_integrity 00:04:43.581 ************************************ 00:04:43.581 21:48:02 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:43.581 21:48:02 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:43.581 21:48:02 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.581 21:48:02 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.581 21:48:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.581 ************************************ 00:04:43.581 START TEST rpc_plugins 00:04:43.581 ************************************ 00:04:43.581 21:48:02 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:43.581 21:48:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:43.581 21:48:02 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.581 21:48:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.581 21:48:02 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.581 21:48:02 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:43.581 21:48:02 rpc.rpc_plugins 
-- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:43.581 21:48:02 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.581 21:48:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.581 21:48:02 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.581 21:48:02 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:43.581 { 00:04:43.581 "name": "Malloc1", 00:04:43.581 "aliases": [ 00:04:43.581 "63fac1bd-84be-47df-bbae-a8be25a030b1" 00:04:43.581 ], 00:04:43.581 "product_name": "Malloc disk", 00:04:43.581 "block_size": 4096, 00:04:43.581 "num_blocks": 256, 00:04:43.581 "uuid": "63fac1bd-84be-47df-bbae-a8be25a030b1", 00:04:43.581 "assigned_rate_limits": { 00:04:43.581 "rw_ios_per_sec": 0, 00:04:43.581 "rw_mbytes_per_sec": 0, 00:04:43.581 "r_mbytes_per_sec": 0, 00:04:43.581 "w_mbytes_per_sec": 0 00:04:43.581 }, 00:04:43.581 "claimed": false, 00:04:43.581 "zoned": false, 00:04:43.581 "supported_io_types": { 00:04:43.581 "read": true, 00:04:43.581 "write": true, 00:04:43.581 "unmap": true, 00:04:43.581 "flush": true, 00:04:43.581 "reset": true, 00:04:43.581 "nvme_admin": false, 00:04:43.581 "nvme_io": false, 00:04:43.581 "nvme_io_md": false, 00:04:43.581 "write_zeroes": true, 00:04:43.581 "zcopy": true, 00:04:43.581 "get_zone_info": false, 00:04:43.581 "zone_management": false, 00:04:43.581 "zone_append": false, 00:04:43.581 "compare": false, 00:04:43.581 "compare_and_write": false, 00:04:43.581 "abort": true, 00:04:43.581 "seek_hole": false, 00:04:43.581 "seek_data": false, 00:04:43.581 "copy": true, 00:04:43.581 "nvme_iov_md": false 00:04:43.581 }, 00:04:43.581 "memory_domains": [ 00:04:43.581 { 00:04:43.581 "dma_device_id": "system", 00:04:43.581 "dma_device_type": 1 00:04:43.581 }, 00:04:43.581 { 00:04:43.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.581 "dma_device_type": 2 00:04:43.581 } 00:04:43.581 ], 00:04:43.581 "driver_specific": {} 00:04:43.581 } 00:04:43.581 ]' 00:04:43.581 21:48:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:43.581 21:48:02 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:43.581 21:48:02 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:43.581 21:48:02 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.581 21:48:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.581 21:48:02 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.581 21:48:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:43.581 21:48:02 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.581 21:48:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.581 21:48:02 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.581 21:48:02 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:43.581 21:48:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:43.581 21:48:02 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:43.581 00:04:43.581 real 0m0.116s 00:04:43.581 user 0m0.073s 00:04:43.581 sys 0m0.012s 00:04:43.581 21:48:02 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.581 21:48:02 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.581 ************************************ 00:04:43.581 END TEST rpc_plugins 00:04:43.581 ************************************ 00:04:43.581 21:48:02 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:43.581 21:48:02 rpc 
-- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:43.581 21:48:02 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.581 21:48:02 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.581 21:48:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.841 ************************************ 00:04:43.841 START TEST rpc_trace_cmd_test 00:04:43.841 ************************************ 00:04:43.841 21:48:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:43.841 21:48:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:43.841 21:48:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:43.841 21:48:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.841 21:48:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:43.841 21:48:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.841 21:48:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:43.841 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3929780", 00:04:43.841 "tpoint_group_mask": "0x8", 00:04:43.841 "iscsi_conn": { 00:04:43.841 "mask": "0x2", 00:04:43.841 "tpoint_mask": "0x0" 00:04:43.841 }, 00:04:43.841 "scsi": { 00:04:43.841 "mask": "0x4", 00:04:43.841 "tpoint_mask": "0x0" 00:04:43.841 }, 00:04:43.841 "bdev": { 00:04:43.841 "mask": "0x8", 00:04:43.841 "tpoint_mask": "0xffffffffffffffff" 00:04:43.841 }, 00:04:43.841 "nvmf_rdma": { 00:04:43.841 "mask": "0x10", 00:04:43.841 "tpoint_mask": "0x0" 00:04:43.841 }, 00:04:43.841 "nvmf_tcp": { 00:04:43.841 "mask": "0x20", 00:04:43.841 "tpoint_mask": "0x0" 00:04:43.841 }, 00:04:43.841 "ftl": { 00:04:43.841 "mask": "0x40", 00:04:43.841 "tpoint_mask": "0x0" 00:04:43.841 }, 00:04:43.841 "blobfs": { 00:04:43.841 "mask": "0x80", 00:04:43.841 "tpoint_mask": "0x0" 00:04:43.841 }, 00:04:43.841 "dsa": { 00:04:43.841 "mask": "0x200", 00:04:43.841 "tpoint_mask": "0x0" 00:04:43.841 }, 00:04:43.841 "thread": { 00:04:43.841 "mask": "0x400", 00:04:43.841 "tpoint_mask": "0x0" 00:04:43.841 }, 00:04:43.841 "nvme_pcie": { 00:04:43.841 "mask": "0x800", 00:04:43.841 "tpoint_mask": "0x0" 00:04:43.841 }, 00:04:43.841 "iaa": { 00:04:43.841 "mask": "0x1000", 00:04:43.841 "tpoint_mask": "0x0" 00:04:43.841 }, 00:04:43.841 "nvme_tcp": { 00:04:43.841 "mask": "0x2000", 00:04:43.841 "tpoint_mask": "0x0" 00:04:43.841 }, 00:04:43.841 "bdev_nvme": { 00:04:43.841 "mask": "0x4000", 00:04:43.841 "tpoint_mask": "0x0" 00:04:43.841 }, 00:04:43.841 "sock": { 00:04:43.841 "mask": "0x8000", 00:04:43.841 "tpoint_mask": "0x0" 00:04:43.841 } 00:04:43.841 }' 00:04:43.841 21:48:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:43.841 21:48:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:43.841 21:48:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:43.841 21:48:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:43.841 21:48:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:43.841 21:48:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:43.841 21:48:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:43.841 21:48:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:43.841 21:48:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:43.841 21:48:03 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 
0xffffffffffffffff '!=' 0x0 ']' 00:04:43.841 00:04:43.841 real 0m0.197s 00:04:43.841 user 0m0.169s 00:04:43.841 sys 0m0.018s 00:04:43.841 21:48:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.841 21:48:03 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:43.841 ************************************ 00:04:43.841 END TEST rpc_trace_cmd_test 00:04:43.841 ************************************ 00:04:43.841 21:48:03 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:43.841 21:48:03 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:43.841 21:48:03 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:43.841 21:48:03 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:43.841 21:48:03 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.841 21:48:03 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.841 21:48:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.841 ************************************ 00:04:43.841 START TEST rpc_daemon_integrity 00:04:43.841 ************************************ 00:04:43.841 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:43.841 21:48:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:43.841 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.841 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.841 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.841 21:48:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:43.841 21:48:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:44.099 21:48:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:44.099 21:48:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:44.099 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.099 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:44.100 { 00:04:44.100 "name": "Malloc2", 00:04:44.100 "aliases": [ 00:04:44.100 "d40671bf-0bdd-4c3e-83aa-cfd75b4a2680" 00:04:44.100 ], 00:04:44.100 "product_name": "Malloc disk", 00:04:44.100 "block_size": 512, 00:04:44.100 "num_blocks": 16384, 00:04:44.100 "uuid": "d40671bf-0bdd-4c3e-83aa-cfd75b4a2680", 00:04:44.100 "assigned_rate_limits": { 00:04:44.100 "rw_ios_per_sec": 0, 00:04:44.100 "rw_mbytes_per_sec": 0, 00:04:44.100 "r_mbytes_per_sec": 0, 00:04:44.100 "w_mbytes_per_sec": 0 00:04:44.100 }, 00:04:44.100 "claimed": false, 00:04:44.100 "zoned": false, 00:04:44.100 "supported_io_types": { 00:04:44.100 "read": true, 00:04:44.100 "write": true, 00:04:44.100 "unmap": true, 00:04:44.100 "flush": true, 00:04:44.100 "reset": true, 00:04:44.100 "nvme_admin": false, 
00:04:44.100 "nvme_io": false, 00:04:44.100 "nvme_io_md": false, 00:04:44.100 "write_zeroes": true, 00:04:44.100 "zcopy": true, 00:04:44.100 "get_zone_info": false, 00:04:44.100 "zone_management": false, 00:04:44.100 "zone_append": false, 00:04:44.100 "compare": false, 00:04:44.100 "compare_and_write": false, 00:04:44.100 "abort": true, 00:04:44.100 "seek_hole": false, 00:04:44.100 "seek_data": false, 00:04:44.100 "copy": true, 00:04:44.100 "nvme_iov_md": false 00:04:44.100 }, 00:04:44.100 "memory_domains": [ 00:04:44.100 { 00:04:44.100 "dma_device_id": "system", 00:04:44.100 "dma_device_type": 1 00:04:44.100 }, 00:04:44.100 { 00:04:44.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.100 "dma_device_type": 2 00:04:44.100 } 00:04:44.100 ], 00:04:44.100 "driver_specific": {} 00:04:44.100 } 00:04:44.100 ]' 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.100 [2024-07-13 21:48:03.338576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:44.100 [2024-07-13 21:48:03.338642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:44.100 [2024-07-13 21:48:03.338682] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000023a80 00:04:44.100 [2024-07-13 21:48:03.338709] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:44.100 [2024-07-13 21:48:03.341384] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:44.100 [2024-07-13 21:48:03.341427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:44.100 Passthru0 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:44.100 { 00:04:44.100 "name": "Malloc2", 00:04:44.100 "aliases": [ 00:04:44.100 "d40671bf-0bdd-4c3e-83aa-cfd75b4a2680" 00:04:44.100 ], 00:04:44.100 "product_name": "Malloc disk", 00:04:44.100 "block_size": 512, 00:04:44.100 "num_blocks": 16384, 00:04:44.100 "uuid": "d40671bf-0bdd-4c3e-83aa-cfd75b4a2680", 00:04:44.100 "assigned_rate_limits": { 00:04:44.100 "rw_ios_per_sec": 0, 00:04:44.100 "rw_mbytes_per_sec": 0, 00:04:44.100 "r_mbytes_per_sec": 0, 00:04:44.100 "w_mbytes_per_sec": 0 00:04:44.100 }, 00:04:44.100 "claimed": true, 00:04:44.100 "claim_type": "exclusive_write", 00:04:44.100 "zoned": false, 00:04:44.100 "supported_io_types": { 00:04:44.100 "read": true, 00:04:44.100 "write": true, 00:04:44.100 "unmap": true, 00:04:44.100 "flush": true, 00:04:44.100 "reset": true, 00:04:44.100 "nvme_admin": false, 00:04:44.100 "nvme_io": false, 00:04:44.100 "nvme_io_md": false, 00:04:44.100 "write_zeroes": true, 00:04:44.100 "zcopy": 
true, 00:04:44.100 "get_zone_info": false, 00:04:44.100 "zone_management": false, 00:04:44.100 "zone_append": false, 00:04:44.100 "compare": false, 00:04:44.100 "compare_and_write": false, 00:04:44.100 "abort": true, 00:04:44.100 "seek_hole": false, 00:04:44.100 "seek_data": false, 00:04:44.100 "copy": true, 00:04:44.100 "nvme_iov_md": false 00:04:44.100 }, 00:04:44.100 "memory_domains": [ 00:04:44.100 { 00:04:44.100 "dma_device_id": "system", 00:04:44.100 "dma_device_type": 1 00:04:44.100 }, 00:04:44.100 { 00:04:44.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.100 "dma_device_type": 2 00:04:44.100 } 00:04:44.100 ], 00:04:44.100 "driver_specific": {} 00:04:44.100 }, 00:04:44.100 { 00:04:44.100 "name": "Passthru0", 00:04:44.100 "aliases": [ 00:04:44.100 "ab869fbd-603f-5677-80cc-5bb4f259c211" 00:04:44.100 ], 00:04:44.100 "product_name": "passthru", 00:04:44.100 "block_size": 512, 00:04:44.100 "num_blocks": 16384, 00:04:44.100 "uuid": "ab869fbd-603f-5677-80cc-5bb4f259c211", 00:04:44.100 "assigned_rate_limits": { 00:04:44.100 "rw_ios_per_sec": 0, 00:04:44.100 "rw_mbytes_per_sec": 0, 00:04:44.100 "r_mbytes_per_sec": 0, 00:04:44.100 "w_mbytes_per_sec": 0 00:04:44.100 }, 00:04:44.100 "claimed": false, 00:04:44.100 "zoned": false, 00:04:44.100 "supported_io_types": { 00:04:44.100 "read": true, 00:04:44.100 "write": true, 00:04:44.100 "unmap": true, 00:04:44.100 "flush": true, 00:04:44.100 "reset": true, 00:04:44.100 "nvme_admin": false, 00:04:44.100 "nvme_io": false, 00:04:44.100 "nvme_io_md": false, 00:04:44.100 "write_zeroes": true, 00:04:44.100 "zcopy": true, 00:04:44.100 "get_zone_info": false, 00:04:44.100 "zone_management": false, 00:04:44.100 "zone_append": false, 00:04:44.100 "compare": false, 00:04:44.100 "compare_and_write": false, 00:04:44.100 "abort": true, 00:04:44.100 "seek_hole": false, 00:04:44.100 "seek_data": false, 00:04:44.100 "copy": true, 00:04:44.100 "nvme_iov_md": false 00:04:44.100 }, 00:04:44.100 "memory_domains": [ 00:04:44.100 { 00:04:44.100 "dma_device_id": "system", 00:04:44.100 "dma_device_type": 1 00:04:44.100 }, 00:04:44.100 { 00:04:44.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.100 "dma_device_type": 2 00:04:44.100 } 00:04:44.100 ], 00:04:44.100 "driver_specific": { 00:04:44.100 "passthru": { 00:04:44.100 "name": "Passthru0", 00:04:44.100 "base_bdev_name": "Malloc2" 00:04:44.100 } 00:04:44.100 } 00:04:44.100 } 00:04:44.100 ]' 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:44.100 00:04:44.100 real 0m0.260s 00:04:44.100 user 0m0.151s 00:04:44.100 sys 0m0.023s 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.100 21:48:03 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.100 ************************************ 00:04:44.100 END TEST rpc_daemon_integrity 00:04:44.100 ************************************ 00:04:44.358 21:48:03 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:44.358 21:48:03 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:44.358 21:48:03 rpc -- rpc/rpc.sh@84 -- # killprocess 3929780 00:04:44.358 21:48:03 rpc -- common/autotest_common.sh@948 -- # '[' -z 3929780 ']' 00:04:44.358 21:48:03 rpc -- common/autotest_common.sh@952 -- # kill -0 3929780 00:04:44.358 21:48:03 rpc -- common/autotest_common.sh@953 -- # uname 00:04:44.358 21:48:03 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:44.358 21:48:03 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3929780 00:04:44.358 21:48:03 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:44.358 21:48:03 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:44.358 21:48:03 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3929780' 00:04:44.358 killing process with pid 3929780 00:04:44.358 21:48:03 rpc -- common/autotest_common.sh@967 -- # kill 3929780 00:04:44.358 21:48:03 rpc -- common/autotest_common.sh@972 -- # wait 3929780 00:04:46.895 00:04:46.895 real 0m4.899s 00:04:46.895 user 0m5.362s 00:04:46.895 sys 0m0.824s 00:04:46.895 21:48:05 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.895 21:48:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.895 ************************************ 00:04:46.895 END TEST rpc 00:04:46.895 ************************************ 00:04:46.895 21:48:06 -- common/autotest_common.sh@1142 -- # return 0 00:04:46.895 21:48:06 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:46.895 21:48:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.895 21:48:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.895 21:48:06 -- common/autotest_common.sh@10 -- # set +x 00:04:46.895 ************************************ 00:04:46.895 START TEST skip_rpc 00:04:46.895 ************************************ 00:04:46.895 21:48:06 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:46.895 * Looking for test storage... 
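Note: the rpc suite that just ended (rpc_integrity, rpc_plugins, rpc_trace_cmd_test, rpc_daemon_integrity) follows one pattern: create bdevs over JSON-RPC, assert on jq-parsed output, delete them, and assert the bdev list is empty again. The trace check additionally confirms the target was launched with the bdev tracepoint group enabled; the trace_get_info dump above shows tpoint_group_mask 0x8 (the bdev group, per its own "mask" field) with every bdev tracepoint lit (0xffffffffffffffff). A sketch of the same checks against a standalone target; the -e mask flag and the expected values are inferred from this log, not guaranteed for other SPDK versions:

  build/bin/spdk_tgt -m 0x1 -e 0x8 &                          # enable the 0x8 tracepoint group at startup
  scripts/rpc.py trace_get_info | jq -r .bdev.tpoint_mask     # expected here: 0xffffffffffffffff
  scripts/rpc.py bdev_get_bdevs | jq length                   # expected after teardown: 0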
00:04:46.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:46.895 21:48:06 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:46.895 21:48:06 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:46.895 21:48:06 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:46.895 21:48:06 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.895 21:48:06 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.895 21:48:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.895 ************************************ 00:04:46.895 START TEST skip_rpc 00:04:46.895 ************************************ 00:04:46.895 21:48:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:46.895 21:48:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3930529 00:04:46.895 21:48:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:46.895 21:48:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.895 21:48:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:46.895 [2024-07-13 21:48:06.213777] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:46.896 [2024-07-13 21:48:06.213935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930529 ] 00:04:46.896 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.155 [2024-07-13 21:48:06.345414] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.415 [2024-07-13 21:48:06.597746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3930529 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 3930529 ']' 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 3930529 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3930529 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3930529' 00:04:52.679 killing process with pid 3930529 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 3930529 00:04:52.679 21:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 3930529 00:04:54.640 00:04:54.640 real 0m7.523s 00:04:54.640 user 0m7.022s 00:04:54.640 sys 0m0.484s 00:04:54.640 21:48:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.640 21:48:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.640 ************************************ 00:04:54.640 END TEST skip_rpc 00:04:54.640 ************************************ 00:04:54.640 21:48:13 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:54.640 21:48:13 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:54.640 21:48:13 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.640 21:48:13 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.640 21:48:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.640 ************************************ 00:04:54.640 START TEST skip_rpc_with_json 00:04:54.640 ************************************ 00:04:54.640 21:48:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:54.640 21:48:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:54.640 21:48:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3931987 00:04:54.640 21:48:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.640 21:48:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.640 21:48:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3931987 00:04:54.640 21:48:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 3931987 ']' 00:04:54.640 21:48:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.640 21:48:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:54.640 21:48:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
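Note: skip_rpc above starts the target with --no-rpc-server, so the NOT-wrapped rpc_cmd spdk_get_version must fail for the test to pass; skip_rpc_with_json now starts a normal target and waitforlisten blocks until the RPC socket answers. A sketch of both checks (the polling loop is illustrative, not the harness's exact code, and assumes the default /var/tmp/spdk.sock):

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  scripts/rpc.py spdk_get_version && echo FAIL || echo 'RPC correctly unavailable'

  # wait-for-listen pattern used once a normal target is launched
  until scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do sleep 0.5; done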
00:04:54.640 21:48:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:54.640 21:48:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.640 [2024-07-13 21:48:13.792736] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:54.640 [2024-07-13 21:48:13.792902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3931987 ] 00:04:54.640 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.640 [2024-07-13 21:48:13.926879] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.900 [2024-07-13 21:48:14.181019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.837 21:48:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:55.837 21:48:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:55.837 21:48:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:55.837 21:48:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.837 21:48:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:55.837 [2024-07-13 21:48:15.081275] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:55.837 request: 00:04:55.837 { 00:04:55.837 "trtype": "tcp", 00:04:55.837 "method": "nvmf_get_transports", 00:04:55.837 "req_id": 1 00:04:55.837 } 00:04:55.837 Got JSON-RPC error response 00:04:55.837 response: 00:04:55.837 { 00:04:55.837 "code": -19, 00:04:55.837 "message": "No such device" 00:04:55.837 } 00:04:55.837 21:48:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:55.837 21:48:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:55.837 21:48:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.837 21:48:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:55.837 [2024-07-13 21:48:15.089409] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:55.837 21:48:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.837 21:48:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:55.837 21:48:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.837 21:48:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.097 21:48:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.097 21:48:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:56.097 { 00:04:56.097 "subsystems": [ 00:04:56.097 { 00:04:56.097 "subsystem": "keyring", 00:04:56.097 "config": [] 00:04:56.097 }, 00:04:56.097 { 00:04:56.097 "subsystem": "iobuf", 00:04:56.097 "config": [ 00:04:56.097 { 00:04:56.097 "method": "iobuf_set_options", 00:04:56.097 "params": { 00:04:56.097 "small_pool_count": 8192, 00:04:56.097 "large_pool_count": 1024, 00:04:56.097 "small_bufsize": 8192, 00:04:56.097 "large_bufsize": 135168 00:04:56.097 } 00:04:56.097 } 00:04:56.097 ] 00:04:56.097 }, 00:04:56.097 { 00:04:56.097 "subsystem": 
"sock", 00:04:56.097 "config": [ 00:04:56.097 { 00:04:56.097 "method": "sock_set_default_impl", 00:04:56.097 "params": { 00:04:56.097 "impl_name": "posix" 00:04:56.097 } 00:04:56.097 }, 00:04:56.097 { 00:04:56.097 "method": "sock_impl_set_options", 00:04:56.097 "params": { 00:04:56.097 "impl_name": "ssl", 00:04:56.097 "recv_buf_size": 4096, 00:04:56.097 "send_buf_size": 4096, 00:04:56.097 "enable_recv_pipe": true, 00:04:56.097 "enable_quickack": false, 00:04:56.097 "enable_placement_id": 0, 00:04:56.097 "enable_zerocopy_send_server": true, 00:04:56.097 "enable_zerocopy_send_client": false, 00:04:56.097 "zerocopy_threshold": 0, 00:04:56.097 "tls_version": 0, 00:04:56.097 "enable_ktls": false 00:04:56.097 } 00:04:56.097 }, 00:04:56.097 { 00:04:56.097 "method": "sock_impl_set_options", 00:04:56.097 "params": { 00:04:56.097 "impl_name": "posix", 00:04:56.097 "recv_buf_size": 2097152, 00:04:56.097 "send_buf_size": 2097152, 00:04:56.097 "enable_recv_pipe": true, 00:04:56.097 "enable_quickack": false, 00:04:56.097 "enable_placement_id": 0, 00:04:56.097 "enable_zerocopy_send_server": true, 00:04:56.097 "enable_zerocopy_send_client": false, 00:04:56.097 "zerocopy_threshold": 0, 00:04:56.097 "tls_version": 0, 00:04:56.097 "enable_ktls": false 00:04:56.097 } 00:04:56.097 } 00:04:56.097 ] 00:04:56.097 }, 00:04:56.097 { 00:04:56.097 "subsystem": "vmd", 00:04:56.097 "config": [] 00:04:56.097 }, 00:04:56.097 { 00:04:56.097 "subsystem": "accel", 00:04:56.097 "config": [ 00:04:56.097 { 00:04:56.097 "method": "accel_set_options", 00:04:56.097 "params": { 00:04:56.097 "small_cache_size": 128, 00:04:56.097 "large_cache_size": 16, 00:04:56.097 "task_count": 2048, 00:04:56.097 "sequence_count": 2048, 00:04:56.097 "buf_count": 2048 00:04:56.097 } 00:04:56.097 } 00:04:56.097 ] 00:04:56.097 }, 00:04:56.097 { 00:04:56.097 "subsystem": "bdev", 00:04:56.097 "config": [ 00:04:56.097 { 00:04:56.097 "method": "bdev_set_options", 00:04:56.097 "params": { 00:04:56.097 "bdev_io_pool_size": 65535, 00:04:56.097 "bdev_io_cache_size": 256, 00:04:56.097 "bdev_auto_examine": true, 00:04:56.097 "iobuf_small_cache_size": 128, 00:04:56.097 "iobuf_large_cache_size": 16 00:04:56.097 } 00:04:56.097 }, 00:04:56.097 { 00:04:56.097 "method": "bdev_raid_set_options", 00:04:56.097 "params": { 00:04:56.097 "process_window_size_kb": 1024 00:04:56.097 } 00:04:56.097 }, 00:04:56.097 { 00:04:56.097 "method": "bdev_iscsi_set_options", 00:04:56.097 "params": { 00:04:56.097 "timeout_sec": 30 00:04:56.097 } 00:04:56.097 }, 00:04:56.097 { 00:04:56.097 "method": "bdev_nvme_set_options", 00:04:56.097 "params": { 00:04:56.097 "action_on_timeout": "none", 00:04:56.097 "timeout_us": 0, 00:04:56.097 "timeout_admin_us": 0, 00:04:56.097 "keep_alive_timeout_ms": 10000, 00:04:56.097 "arbitration_burst": 0, 00:04:56.097 "low_priority_weight": 0, 00:04:56.097 "medium_priority_weight": 0, 00:04:56.097 "high_priority_weight": 0, 00:04:56.097 "nvme_adminq_poll_period_us": 10000, 00:04:56.097 "nvme_ioq_poll_period_us": 0, 00:04:56.097 "io_queue_requests": 0, 00:04:56.097 "delay_cmd_submit": true, 00:04:56.097 "transport_retry_count": 4, 00:04:56.097 "bdev_retry_count": 3, 00:04:56.097 "transport_ack_timeout": 0, 00:04:56.097 "ctrlr_loss_timeout_sec": 0, 00:04:56.097 "reconnect_delay_sec": 0, 00:04:56.097 "fast_io_fail_timeout_sec": 0, 00:04:56.097 "disable_auto_failback": false, 00:04:56.097 "generate_uuids": false, 00:04:56.097 "transport_tos": 0, 00:04:56.097 "nvme_error_stat": false, 00:04:56.097 "rdma_srq_size": 0, 00:04:56.097 "io_path_stat": false, 
00:04:56.097 "allow_accel_sequence": false, 00:04:56.097 "rdma_max_cq_size": 0, 00:04:56.097 "rdma_cm_event_timeout_ms": 0, 00:04:56.097 "dhchap_digests": [ 00:04:56.097 "sha256", 00:04:56.097 "sha384", 00:04:56.097 "sha512" 00:04:56.097 ], 00:04:56.097 "dhchap_dhgroups": [ 00:04:56.097 "null", 00:04:56.097 "ffdhe2048", 00:04:56.097 "ffdhe3072", 00:04:56.097 "ffdhe4096", 00:04:56.097 "ffdhe6144", 00:04:56.097 "ffdhe8192" 00:04:56.097 ] 00:04:56.097 } 00:04:56.097 }, 00:04:56.097 { 00:04:56.097 "method": "bdev_nvme_set_hotplug", 00:04:56.097 "params": { 00:04:56.097 "period_us": 100000, 00:04:56.097 "enable": false 00:04:56.097 } 00:04:56.097 }, 00:04:56.097 { 00:04:56.097 "method": "bdev_wait_for_examine" 00:04:56.097 } 00:04:56.097 ] 00:04:56.097 }, 00:04:56.097 { 00:04:56.097 "subsystem": "scsi", 00:04:56.097 "config": null 00:04:56.097 }, 00:04:56.097 { 00:04:56.097 "subsystem": "scheduler", 00:04:56.097 "config": [ 00:04:56.097 { 00:04:56.097 "method": "framework_set_scheduler", 00:04:56.097 "params": { 00:04:56.097 "name": "static" 00:04:56.097 } 00:04:56.097 } 00:04:56.097 ] 00:04:56.097 }, 00:04:56.097 { 00:04:56.097 "subsystem": "vhost_scsi", 00:04:56.097 "config": [] 00:04:56.097 }, 00:04:56.097 { 00:04:56.097 "subsystem": "vhost_blk", 00:04:56.097 "config": [] 00:04:56.097 }, 00:04:56.097 { 00:04:56.097 "subsystem": "ublk", 00:04:56.098 "config": [] 00:04:56.098 }, 00:04:56.098 { 00:04:56.098 "subsystem": "nbd", 00:04:56.098 "config": [] 00:04:56.098 }, 00:04:56.098 { 00:04:56.098 "subsystem": "nvmf", 00:04:56.098 "config": [ 00:04:56.098 { 00:04:56.098 "method": "nvmf_set_config", 00:04:56.098 "params": { 00:04:56.098 "discovery_filter": "match_any", 00:04:56.098 "admin_cmd_passthru": { 00:04:56.098 "identify_ctrlr": false 00:04:56.098 } 00:04:56.098 } 00:04:56.098 }, 00:04:56.098 { 00:04:56.098 "method": "nvmf_set_max_subsystems", 00:04:56.098 "params": { 00:04:56.098 "max_subsystems": 1024 00:04:56.098 } 00:04:56.098 }, 00:04:56.098 { 00:04:56.098 "method": "nvmf_set_crdt", 00:04:56.098 "params": { 00:04:56.098 "crdt1": 0, 00:04:56.098 "crdt2": 0, 00:04:56.098 "crdt3": 0 00:04:56.098 } 00:04:56.098 }, 00:04:56.098 { 00:04:56.098 "method": "nvmf_create_transport", 00:04:56.098 "params": { 00:04:56.098 "trtype": "TCP", 00:04:56.098 "max_queue_depth": 128, 00:04:56.098 "max_io_qpairs_per_ctrlr": 127, 00:04:56.098 "in_capsule_data_size": 4096, 00:04:56.098 "max_io_size": 131072, 00:04:56.098 "io_unit_size": 131072, 00:04:56.098 "max_aq_depth": 128, 00:04:56.098 "num_shared_buffers": 511, 00:04:56.098 "buf_cache_size": 4294967295, 00:04:56.098 "dif_insert_or_strip": false, 00:04:56.098 "zcopy": false, 00:04:56.098 "c2h_success": true, 00:04:56.098 "sock_priority": 0, 00:04:56.098 "abort_timeout_sec": 1, 00:04:56.098 "ack_timeout": 0, 00:04:56.098 "data_wr_pool_size": 0 00:04:56.098 } 00:04:56.098 } 00:04:56.098 ] 00:04:56.098 }, 00:04:56.098 { 00:04:56.098 "subsystem": "iscsi", 00:04:56.098 "config": [ 00:04:56.098 { 00:04:56.098 "method": "iscsi_set_options", 00:04:56.098 "params": { 00:04:56.098 "node_base": "iqn.2016-06.io.spdk", 00:04:56.098 "max_sessions": 128, 00:04:56.098 "max_connections_per_session": 2, 00:04:56.098 "max_queue_depth": 64, 00:04:56.098 "default_time2wait": 2, 00:04:56.098 "default_time2retain": 20, 00:04:56.098 "first_burst_length": 8192, 00:04:56.098 "immediate_data": true, 00:04:56.098 "allow_duplicated_isid": false, 00:04:56.098 "error_recovery_level": 0, 00:04:56.098 "nop_timeout": 60, 00:04:56.098 "nop_in_interval": 30, 00:04:56.098 "disable_chap": 
false, 00:04:56.098 "require_chap": false, 00:04:56.098 "mutual_chap": false, 00:04:56.098 "chap_group": 0, 00:04:56.098 "max_large_datain_per_connection": 64, 00:04:56.098 "max_r2t_per_connection": 4, 00:04:56.098 "pdu_pool_size": 36864, 00:04:56.098 "immediate_data_pool_size": 16384, 00:04:56.098 "data_out_pool_size": 2048 00:04:56.098 } 00:04:56.098 } 00:04:56.098 ] 00:04:56.098 } 00:04:56.098 ] 00:04:56.098 } 00:04:56.098 21:48:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:56.098 21:48:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3931987 00:04:56.098 21:48:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3931987 ']' 00:04:56.098 21:48:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3931987 00:04:56.098 21:48:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:56.098 21:48:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:56.098 21:48:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3931987 00:04:56.098 21:48:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:56.098 21:48:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:56.098 21:48:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3931987' 00:04:56.098 killing process with pid 3931987 00:04:56.098 21:48:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3931987 00:04:56.098 21:48:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3931987 00:04:58.631 21:48:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3932407 00:04:58.631 21:48:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:58.631 21:48:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:03.906 21:48:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3932407 00:05:03.906 21:48:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3932407 ']' 00:05:03.906 21:48:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3932407 00:05:03.906 21:48:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:03.906 21:48:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:03.906 21:48:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3932407 00:05:03.906 21:48:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:03.906 21:48:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:03.906 21:48:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3932407' 00:05:03.906 killing process with pid 3932407 00:05:03.906 21:48:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3932407 00:05:03.906 21:48:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3932407 00:05:06.444 21:48:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:06.444 21:48:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:06.444 00:05:06.444 real 0m11.595s 00:05:06.444 user 0m11.018s 00:05:06.444 sys 0m1.066s 00:05:06.444 21:48:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.444 21:48:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.444 ************************************ 00:05:06.444 END TEST skip_rpc_with_json 00:05:06.444 ************************************ 00:05:06.444 21:48:25 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:06.444 21:48:25 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:06.444 21:48:25 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.444 21:48:25 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.444 21:48:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.444 ************************************ 00:05:06.444 START TEST skip_rpc_with_delay 00:05:06.444 ************************************ 00:05:06.444 21:48:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:06.444 21:48:25 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:06.444 21:48:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:06.444 21:48:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:06.444 21:48:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.444 21:48:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.444 21:48:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.444 21:48:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.444 21:48:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.444 21:48:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.444 21:48:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:06.444 21:48:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:06.444 21:48:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:06.444 [2024-07-13 21:48:25.438259] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
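Note: the ERROR above is skip_rpc_with_delay passing, not failing: --wait-for-rpc asks the app to pause subsystem initialization until an RPC releases it, which is contradictory when --no-rpc-server disables the RPC server, so spdk_app_start refuses to boot and the NOT wrapper accepts the nonzero exit (the unclaim error just below is only the aborted app releasing core 0). The normal pairing, as a sketch (framework_start_init is the standard SPDK RPC for resuming init):

  build/bin/spdk_tgt --wait-for-rpc &
  scripts/rpc.py framework_start_init     # complete init after issuing any early-boot RPCs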
00:05:06.444 [2024-07-13 21:48:25.438431] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:06.444 21:48:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:06.444 21:48:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:06.444 21:48:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:06.444 21:48:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:06.444 00:05:06.444 real 0m0.143s 00:05:06.444 user 0m0.079s 00:05:06.444 sys 0m0.063s 00:05:06.444 21:48:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.444 21:48:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:06.444 ************************************ 00:05:06.444 END TEST skip_rpc_with_delay 00:05:06.444 ************************************ 00:05:06.444 21:48:25 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:06.444 21:48:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:06.444 21:48:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:06.444 21:48:25 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:06.445 21:48:25 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.445 21:48:25 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.445 21:48:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.445 ************************************ 00:05:06.445 START TEST exit_on_failed_rpc_init 00:05:06.445 ************************************ 00:05:06.445 21:48:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:06.445 21:48:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3933391 00:05:06.445 21:48:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:06.445 21:48:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3933391 00:05:06.445 21:48:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 3933391 ']' 00:05:06.445 21:48:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.445 21:48:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:06.445 21:48:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.445 21:48:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:06.445 21:48:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:06.445 [2024-07-13 21:48:25.632502] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:06.445 [2024-07-13 21:48:25.632648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3933391 ] 00:05:06.445 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.445 [2024-07-13 21:48:25.764169] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.705 [2024-07-13 21:48:26.019687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.642 21:48:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.642 21:48:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:07.642 21:48:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.642 21:48:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:07.642 21:48:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:07.642 21:48:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:07.642 21:48:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.642 21:48:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.642 21:48:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.642 21:48:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.642 21:48:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.642 21:48:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.642 21:48:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.642 21:48:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:07.642 21:48:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:07.642 [2024-07-13 21:48:27.012565] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:07.642 [2024-07-13 21:48:27.012699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3933530 ] 00:05:07.901 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.901 [2024-07-13 21:48:27.144332] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.162 [2024-07-13 21:48:27.398286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.162 [2024-07-13 21:48:27.398437] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:08.162 [2024-07-13 21:48:27.398472] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:08.162 [2024-07-13 21:48:27.398498] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:08.731 21:48:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:08.731 21:48:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:08.731 21:48:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:08.731 21:48:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:08.731 21:48:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:08.731 21:48:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:08.731 21:48:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:08.731 21:48:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3933391 00:05:08.731 21:48:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 3933391 ']' 00:05:08.731 21:48:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 3933391 00:05:08.731 21:48:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:08.731 21:48:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:08.731 21:48:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3933391 00:05:08.731 21:48:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:08.731 21:48:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:08.731 21:48:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3933391' 00:05:08.731 killing process with pid 3933391 00:05:08.731 21:48:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 3933391 00:05:08.731 21:48:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 3933391 00:05:11.267 00:05:11.267 real 0m4.888s 00:05:11.267 user 0m5.557s 00:05:11.267 sys 0m0.768s 00:05:11.267 21:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.267 21:48:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:11.267 ************************************ 00:05:11.267 END TEST exit_on_failed_rpc_init 00:05:11.267 ************************************ 00:05:11.267 21:48:30 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:11.267 21:48:30 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:11.267 00:05:11.267 real 0m24.404s 00:05:11.267 user 0m23.779s 00:05:11.267 sys 0m2.550s 00:05:11.267 21:48:30 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.267 21:48:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.267 ************************************ 00:05:11.267 END TEST skip_rpc 00:05:11.267 ************************************ 00:05:11.267 21:48:30 -- common/autotest_common.sh@1142 -- # return 0 00:05:11.267 21:48:30 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:11.267 21:48:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.267 21:48:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.267 21:48:30 -- common/autotest_common.sh@10 -- # set +x 00:05:11.267 ************************************ 00:05:11.267 START TEST rpc_client 00:05:11.267 ************************************ 00:05:11.267 21:48:30 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:11.267 * Looking for test storage... 00:05:11.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:11.267 21:48:30 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:11.267 OK 00:05:11.267 21:48:30 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:11.267 00:05:11.267 real 0m0.100s 00:05:11.267 user 0m0.041s 00:05:11.267 sys 0m0.064s 00:05:11.267 21:48:30 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.267 21:48:30 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:11.267 ************************************ 00:05:11.267 END TEST rpc_client 00:05:11.267 ************************************ 00:05:11.267 21:48:30 -- common/autotest_common.sh@1142 -- # return 0 00:05:11.267 21:48:30 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:11.267 21:48:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.267 21:48:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.267 21:48:30 -- common/autotest_common.sh@10 -- # set +x 00:05:11.267 ************************************ 00:05:11.267 START TEST json_config 00:05:11.267 ************************************ 00:05:11.267 21:48:30 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:11.526 21:48:30 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:11.526 21:48:30 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:11.526 21:48:30 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:11.526 21:48:30 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:11.526 21:48:30 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:11.526 21:48:30 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:11.526 21:48:30 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:11.526 21:48:30 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:11.526 21:48:30 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:11.526 
21:48:30 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:11.526 21:48:30 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:11.526 21:48:30 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:11.526 21:48:30 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:11.526 21:48:30 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:11.526 21:48:30 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:11.526 21:48:30 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:11.526 21:48:30 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:11.526 21:48:30 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:11.526 21:48:30 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:11.526 21:48:30 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:11.526 21:48:30 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:11.526 21:48:30 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:11.526 21:48:30 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.526 21:48:30 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.526 21:48:30 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.526 21:48:30 json_config -- paths/export.sh@5 -- # export PATH 00:05:11.527 21:48:30 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.527 21:48:30 json_config -- nvmf/common.sh@47 -- # : 0 00:05:11.527 21:48:30 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:11.527 21:48:30 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:11.527 21:48:30 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:11.527 21:48:30 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:11.527 21:48:30 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:11.527 21:48:30 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:11.527 21:48:30 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:11.527 21:48:30 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:11.527 21:48:30 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:11.527 21:48:30 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:11.527 21:48:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:11.527 21:48:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:11.527 21:48:30 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:11.527 21:48:30 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:11.527 21:48:30 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:11.527 21:48:30 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:11.527 21:48:30 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:11.527 21:48:30 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:11.527 21:48:30 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:11.527 21:48:30 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:11.527 21:48:30 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:11.527 21:48:30 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:11.527 21:48:30 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:11.527 21:48:30 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:11.527 INFO: JSON configuration test init 00:05:11.527 21:48:30 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:11.527 21:48:30 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:11.527 21:48:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:11.527 21:48:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.527 21:48:30 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:11.527 21:48:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:11.527 21:48:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.527 21:48:30 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:11.527 21:48:30 json_config -- json_config/common.sh@9 -- # local app=target 00:05:11.527 21:48:30 json_config -- json_config/common.sh@10 -- # shift 00:05:11.527 21:48:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:11.527 21:48:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:11.527 21:48:30 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:11.527 21:48:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.527 21:48:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:11.527 21:48:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3934169 00:05:11.527 21:48:30 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:11.527 21:48:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:11.527 Waiting for target to run... 00:05:11.527 21:48:30 json_config -- json_config/common.sh@25 -- # waitforlisten 3934169 /var/tmp/spdk_tgt.sock 00:05:11.527 21:48:30 json_config -- common/autotest_common.sh@829 -- # '[' -z 3934169 ']' 00:05:11.527 21:48:30 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:11.527 21:48:30 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.527 21:48:30 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:11.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:11.527 21:48:30 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.527 21:48:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.527 [2024-07-13 21:48:30.791292] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:11.527 [2024-07-13 21:48:30.791445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3934169 ] 00:05:11.527 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.095 [2024-07-13 21:48:31.224621] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.095 [2024-07-13 21:48:31.447882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.353 21:48:31 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.353 21:48:31 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:12.353 21:48:31 json_config -- json_config/common.sh@26 -- # echo '' 00:05:12.353 00:05:12.353 21:48:31 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:12.353 21:48:31 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:12.353 21:48:31 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:12.353 21:48:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.353 21:48:31 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:12.353 21:48:31 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:12.353 21:48:31 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:12.353 21:48:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.353 21:48:31 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:12.353 21:48:31 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:12.353 21:48:31 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:16.607 21:48:35 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:16.607 21:48:35 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:16.607 21:48:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:16.607 21:48:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.607 21:48:35 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:16.607 21:48:35 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:16.607 21:48:35 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:16.607 21:48:35 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:16.607 21:48:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:16.607 21:48:35 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:16.607 21:48:35 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:16.607 21:48:35 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:16.607 21:48:35 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:16.607 21:48:35 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:16.607 21:48:35 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:16.607 21:48:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.607 21:48:35 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:16.607 21:48:35 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:16.607 21:48:35 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:16.607 21:48:35 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:16.607 21:48:35 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:16.607 21:48:35 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:16.607 21:48:35 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:16.607 21:48:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:16.607 21:48:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.607 21:48:35 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:16.607 21:48:35 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:16.607 21:48:35 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:16.607 21:48:35 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:16.607 21:48:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:16.865 MallocForNvmf0 00:05:16.865 21:48:36 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:16.865 21:48:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:17.122 MallocForNvmf1 00:05:17.122 21:48:36 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:17.122 21:48:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:17.380 [2024-07-13 21:48:36.545140] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:17.380 21:48:36 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:17.380 21:48:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:17.638 21:48:36 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:17.638 21:48:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:17.897 21:48:37 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:17.897 21:48:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:18.156 21:48:37 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:18.156 21:48:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:18.156 [2024-07-13 21:48:37.528541] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:18.156 21:48:37 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:18.156 21:48:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:18.156 21:48:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.414 21:48:37 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:18.414 21:48:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:18.414 21:48:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.414 21:48:37 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:18.414 21:48:37 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:18.414 21:48:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:18.672 MallocBdevForConfigChangeCheck 00:05:18.672 21:48:37 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:18.672 21:48:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:18.672 21:48:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.672 21:48:37 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:18.672 21:48:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:18.932 21:48:38 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:18.932 INFO: shutting down applications... 00:05:18.932 21:48:38 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:18.932 21:48:38 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:18.932 21:48:38 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:18.932 21:48:38 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:20.837 Calling clear_iscsi_subsystem 00:05:20.837 Calling clear_nvmf_subsystem 00:05:20.837 Calling clear_nbd_subsystem 00:05:20.837 Calling clear_ublk_subsystem 00:05:20.837 Calling clear_vhost_blk_subsystem 00:05:20.837 Calling clear_vhost_scsi_subsystem 00:05:20.837 Calling clear_bdev_subsystem 00:05:20.837 21:48:39 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:20.837 21:48:39 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:20.837 21:48:39 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:20.837 21:48:39 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:20.837 21:48:39 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:20.837 21:48:39 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:21.096 21:48:40 json_config -- json_config/json_config.sh@345 -- # break 00:05:21.096 21:48:40 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:21.096 21:48:40 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:21.096 21:48:40 json_config -- json_config/common.sh@31 -- # local app=target 00:05:21.096 21:48:40 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:21.096 21:48:40 json_config -- json_config/common.sh@35 -- # [[ -n 3934169 ]] 00:05:21.096 21:48:40 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3934169 00:05:21.096 21:48:40 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:21.096 21:48:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.096 21:48:40 json_config -- json_config/common.sh@41 -- # kill -0 3934169 00:05:21.096 21:48:40 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.665 21:48:40 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.665 21:48:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.665 21:48:40 json_config -- json_config/common.sh@41 -- # kill -0 3934169 00:05:21.665 21:48:40 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.925 21:48:41 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.925 21:48:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.925 21:48:41 json_config -- json_config/common.sh@41 -- # kill -0 3934169 
00:05:21.925 21:48:41 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:22.492 21:48:41 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:22.492 21:48:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.492 21:48:41 json_config -- json_config/common.sh@41 -- # kill -0 3934169 00:05:22.492 21:48:41 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:22.492 21:48:41 json_config -- json_config/common.sh@43 -- # break 00:05:22.492 21:48:41 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:22.492 21:48:41 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:22.492 SPDK target shutdown done 00:05:22.492 21:48:41 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:22.492 INFO: relaunching applications... 00:05:22.492 21:48:41 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.492 21:48:41 json_config -- json_config/common.sh@9 -- # local app=target 00:05:22.492 21:48:41 json_config -- json_config/common.sh@10 -- # shift 00:05:22.492 21:48:41 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:22.492 21:48:41 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:22.492 21:48:41 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:22.492 21:48:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:22.492 21:48:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:22.492 21:48:41 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3935630 00:05:22.492 21:48:41 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.492 21:48:41 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:22.492 Waiting for target to run... 00:05:22.492 21:48:41 json_config -- json_config/common.sh@25 -- # waitforlisten 3935630 /var/tmp/spdk_tgt.sock 00:05:22.492 21:48:41 json_config -- common/autotest_common.sh@829 -- # '[' -z 3935630 ']' 00:05:22.492 21:48:41 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:22.492 21:48:41 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.492 21:48:41 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:22.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:22.492 21:48:41 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.492 21:48:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.492 [2024-07-13 21:48:41.880811] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
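Before the relaunched target (pid 3935630 above) is used, the harness again blocks in waitforlisten until the app answers on its RPC socket. A rough sketch of that wait, assuming only rpc.py options that appear elsewhere in this log (-s socket, -t timeout); the real helper in autotest_common.sh retries more carefully:

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
            scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }
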
00:05:22.492 [2024-07-13 21:48:41.881052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3935630 ] 00:05:22.750 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.009 [2024-07-13 21:48:42.283273] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.268 [2024-07-13 21:48:42.465486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.462 [2024-07-13 21:48:46.046977] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:27.462 [2024-07-13 21:48:46.079466] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:27.462 21:48:46 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.462 21:48:46 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:27.462 21:48:46 json_config -- json_config/common.sh@26 -- # echo '' 00:05:27.462 00:05:27.462 21:48:46 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:27.462 21:48:46 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:27.462 INFO: Checking if target configuration is the same... 00:05:27.462 21:48:46 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:27.462 21:48:46 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:27.462 21:48:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:27.463 + '[' 2 -ne 2 ']' 00:05:27.463 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:27.463 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:27.463 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:27.463 +++ basename /dev/fd/62 00:05:27.463 ++ mktemp /tmp/62.XXX 00:05:27.463 + tmp_file_1=/tmp/62.XJo 00:05:27.463 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:27.463 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:27.463 + tmp_file_2=/tmp/spdk_tgt_config.json.54X 00:05:27.463 + ret=0 00:05:27.463 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:27.463 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:27.463 + diff -u /tmp/62.XJo /tmp/spdk_tgt_config.json.54X 00:05:27.463 + echo 'INFO: JSON config files are the same' 00:05:27.463 INFO: JSON config files are the same 00:05:27.463 + rm /tmp/62.XJo /tmp/spdk_tgt_config.json.54X 00:05:27.463 + exit 0 00:05:27.463 21:48:46 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:27.463 21:48:46 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:27.463 INFO: changing configuration and checking if this can be detected... 
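The "JSON config files are the same" verdict above comes from canonicalizing both configs and diffing them: save_config dumps the live configuration, config_filter.py -method sort normalizes ordering, and a plain diff -u decides. Roughly, with explicit temp files instead of the /dev/fd plumbing in the trace, and assuming config_filter.py filters stdin to stdout as the pipeline suggests:

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/running.json
    test/json_config/config_filter.py -method sort \
        < spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/saved.json /tmp/running.json && echo 'configs match'
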
00:05:27.463 21:48:46 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:27.463 21:48:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:27.722 21:48:46 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:27.722 21:48:46 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:27.722 21:48:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:27.722 + '[' 2 -ne 2 ']' 00:05:27.722 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:27.722 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:27.722 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:27.722 +++ basename /dev/fd/62 00:05:27.722 ++ mktemp /tmp/62.XXX 00:05:27.722 + tmp_file_1=/tmp/62.j2Y 00:05:27.722 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:27.722 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:27.722 + tmp_file_2=/tmp/spdk_tgt_config.json.fXU 00:05:27.722 + ret=0 00:05:27.722 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:27.981 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:27.981 + diff -u /tmp/62.j2Y /tmp/spdk_tgt_config.json.fXU 00:05:27.981 + ret=1 00:05:27.981 + echo '=== Start of file: /tmp/62.j2Y ===' 00:05:27.981 + cat /tmp/62.j2Y 00:05:27.981 + echo '=== End of file: /tmp/62.j2Y ===' 00:05:27.981 + echo '' 00:05:27.981 + echo '=== Start of file: /tmp/spdk_tgt_config.json.fXU ===' 00:05:27.981 + cat /tmp/spdk_tgt_config.json.fXU 00:05:27.981 + echo '=== End of file: /tmp/spdk_tgt_config.json.fXU ===' 00:05:27.981 + echo '' 00:05:27.982 + rm /tmp/62.j2Y /tmp/spdk_tgt_config.json.fXU 00:05:27.982 + exit 1 00:05:27.982 21:48:47 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:27.982 INFO: configuration change detected. 
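The "configuration change" detected here is deliberate: the test deletes the sentinel bdev it created earlier for exactly this purpose and re-runs the same sorted diff, which must now exit non-zero. The change amounts to one RPC:

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
        bdev_malloc_delete MallocBdevForConfigChangeCheck
    # the sorted diff now returns 1, which this test treats as a pass
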
00:05:27.982 21:48:47 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:27.982 21:48:47 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:27.982 21:48:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:27.982 21:48:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.982 21:48:47 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:27.982 21:48:47 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:27.982 21:48:47 json_config -- json_config/json_config.sh@317 -- # [[ -n 3935630 ]] 00:05:27.982 21:48:47 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:27.982 21:48:47 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:27.982 21:48:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:27.982 21:48:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.982 21:48:47 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:28.241 21:48:47 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:28.242 21:48:47 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:28.242 21:48:47 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:28.242 21:48:47 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:28.242 21:48:47 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:28.242 21:48:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:28.242 21:48:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.242 21:48:47 json_config -- json_config/json_config.sh@323 -- # killprocess 3935630 00:05:28.242 21:48:47 json_config -- common/autotest_common.sh@948 -- # '[' -z 3935630 ']' 00:05:28.242 21:48:47 json_config -- common/autotest_common.sh@952 -- # kill -0 3935630 00:05:28.242 21:48:47 json_config -- common/autotest_common.sh@953 -- # uname 00:05:28.242 21:48:47 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:28.242 21:48:47 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3935630 00:05:28.242 21:48:47 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:28.242 21:48:47 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:28.242 21:48:47 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3935630' 00:05:28.242 killing process with pid 3935630 00:05:28.242 21:48:47 json_config -- common/autotest_common.sh@967 -- # kill 3935630 00:05:28.242 21:48:47 json_config -- common/autotest_common.sh@972 -- # wait 3935630 00:05:30.781 21:48:49 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:30.781 21:48:49 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:30.781 21:48:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.781 21:48:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.781 21:48:49 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:30.781 21:48:49 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:30.781 INFO: Success 00:05:30.781 00:05:30.781 real 0m19.207s 
00:05:30.781 user 0m20.762s 00:05:30.781 sys 0m2.221s 00:05:30.781 21:48:49 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.781 21:48:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.781 ************************************ 00:05:30.781 END TEST json_config 00:05:30.781 ************************************ 00:05:30.781 21:48:49 -- common/autotest_common.sh@1142 -- # return 0 00:05:30.781 21:48:49 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:30.781 21:48:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.781 21:48:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.781 21:48:49 -- common/autotest_common.sh@10 -- # set +x 00:05:30.781 ************************************ 00:05:30.781 START TEST json_config_extra_key 00:05:30.781 ************************************ 00:05:30.781 21:48:49 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:30.781 21:48:49 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:30.781 21:48:49 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:30.781 21:48:49 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:30.781 21:48:49 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:30.781 21:48:49 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:30.781 21:48:49 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:30.781 21:48:49 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:30.781 21:48:49 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:30.781 21:48:49 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:30.781 21:48:49 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:30.781 21:48:49 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:30.781 21:48:49 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:30.781 21:48:49 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:30.781 21:48:49 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:30.781 21:48:49 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:30.781 21:48:49 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:30.781 21:48:49 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:30.781 21:48:49 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:30.781 21:48:49 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:30.781 21:48:49 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:30.781 21:48:49 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:30.781 21:48:49 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:30.781 21:48:49 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.781 21:48:49 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.781 21:48:49 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.781 21:48:49 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:30.781 21:48:49 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.781 21:48:49 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:30.781 21:48:49 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:30.781 21:48:49 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:30.781 21:48:49 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:30.781 21:48:49 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:30.782 21:48:49 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:30.782 21:48:49 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:30.782 21:48:49 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:30.782 21:48:49 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:30.782 21:48:49 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:30.782 21:48:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:30.782 21:48:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:30.782 21:48:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:30.782 21:48:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:30.782 21:48:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:30.782 21:48:49 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:30.782 21:48:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:30.782 21:48:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:30.782 21:48:49 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:30.782 21:48:49 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:30.782 INFO: launching applications... 00:05:30.782 21:48:49 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:30.782 21:48:49 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:30.782 21:48:49 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:30.782 21:48:49 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:30.782 21:48:49 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:30.782 21:48:49 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:30.782 21:48:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.782 21:48:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.782 21:48:49 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3936685 00:05:30.782 21:48:49 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:30.782 21:48:49 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:30.782 Waiting for target to run... 00:05:30.782 21:48:49 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3936685 /var/tmp/spdk_tgt.sock 00:05:30.782 21:48:49 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 3936685 ']' 00:05:30.782 21:48:49 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:30.782 21:48:49 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.782 21:48:49 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:30.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:30.782 21:48:49 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.782 21:48:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:30.782 [2024-07-13 21:48:50.050539] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
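Stripped of the harness bookkeeping, the launch traced above is just spdk_tgt started from a canned JSON config on a known RPC socket (command line as it appears in the trace, backgrounded so the PID can be recorded; the app_pid assignment is a simplification of the associative array common.sh keeps):

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json &
    app_pid=$!   # common.sh stores this as app_pid["target"]
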
00:05:30.782 [2024-07-13 21:48:50.050687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3936685 ] 00:05:30.782 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.349 [2024-07-13 21:48:50.632279] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.609 [2024-07-13 21:48:50.871345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.178 21:48:51 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.178 21:48:51 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:32.178 21:48:51 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:32.178 00:05:32.178 21:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:32.178 INFO: shutting down applications... 00:05:32.178 21:48:51 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:32.178 21:48:51 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:32.178 21:48:51 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:32.178 21:48:51 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3936685 ]] 00:05:32.178 21:48:51 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3936685 00:05:32.178 21:48:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:32.178 21:48:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.178 21:48:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3936685 00:05:32.178 21:48:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:32.795 21:48:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:32.795 21:48:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.795 21:48:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3936685 00:05:32.795 21:48:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:33.364 21:48:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:33.364 21:48:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.364 21:48:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3936685 00:05:33.364 21:48:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:33.932 21:48:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:33.932 21:48:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.932 21:48:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3936685 00:05:33.932 21:48:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:34.190 21:48:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:34.190 21:48:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.190 21:48:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3936685 00:05:34.190 21:48:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:34.756 21:48:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:34.756 21:48:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.756 21:48:54 
json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3936685 00:05:34.756 21:48:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:35.323 21:48:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:35.323 21:48:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.323 21:48:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3936685 00:05:35.323 21:48:54 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:35.323 21:48:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:35.323 21:48:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:35.323 21:48:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:35.323 SPDK target shutdown done 00:05:35.323 21:48:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:35.323 Success 00:05:35.323 00:05:35.323 real 0m4.683s 00:05:35.323 user 0m4.248s 00:05:35.323 sys 0m0.797s 00:05:35.323 21:48:54 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.323 21:48:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:35.323 ************************************ 00:05:35.323 END TEST json_config_extra_key 00:05:35.323 ************************************ 00:05:35.323 21:48:54 -- common/autotest_common.sh@1142 -- # return 0 00:05:35.323 21:48:54 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:35.323 21:48:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.323 21:48:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.323 21:48:54 -- common/autotest_common.sh@10 -- # set +x 00:05:35.323 ************************************ 00:05:35.323 START TEST alias_rpc 00:05:35.323 ************************************ 00:05:35.323 21:48:54 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:35.323 * Looking for test storage... 00:05:35.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:35.323 21:48:54 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:35.323 21:48:54 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3937280 00:05:35.323 21:48:54 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.323 21:48:54 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3937280 00:05:35.323 21:48:54 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 3937280 ']' 00:05:35.323 21:48:54 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.323 21:48:54 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.323 21:48:54 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.323 21:48:54 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.323 21:48:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.583 [2024-07-13 21:48:54.775541] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
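Every suite in this log is driven by the same run_test wrapper, which produces the START TEST/END TEST banners and the real/user/sys timings seen above. An approximation of the pattern only; the real autotest_common.sh helper also validates its arguments and manages xtrace state:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
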
00:05:35.583 [2024-07-13 21:48:54.775695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3937280 ] 00:05:35.583 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.583 [2024-07-13 21:48:54.905200] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.840 [2024-07-13 21:48:55.161998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.775 21:48:56 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.775 21:48:56 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:36.775 21:48:56 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:37.034 21:48:56 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3937280 00:05:37.034 21:48:56 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 3937280 ']' 00:05:37.034 21:48:56 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 3937280 00:05:37.034 21:48:56 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:37.034 21:48:56 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:37.034 21:48:56 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3937280 00:05:37.034 21:48:56 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:37.034 21:48:56 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:37.034 21:48:56 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3937280' 00:05:37.034 killing process with pid 3937280 00:05:37.034 21:48:56 alias_rpc -- common/autotest_common.sh@967 -- # kill 3937280 00:05:37.034 21:48:56 alias_rpc -- common/autotest_common.sh@972 -- # wait 3937280 00:05:39.572 00:05:39.572 real 0m4.173s 00:05:39.572 user 0m4.255s 00:05:39.572 sys 0m0.628s 00:05:39.572 21:48:58 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.572 21:48:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.572 ************************************ 00:05:39.572 END TEST alias_rpc 00:05:39.572 ************************************ 00:05:39.572 21:48:58 -- common/autotest_common.sh@1142 -- # return 0 00:05:39.572 21:48:58 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:39.572 21:48:58 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:39.572 21:48:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.572 21:48:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.572 21:48:58 -- common/autotest_common.sh@10 -- # set +x 00:05:39.572 ************************************ 00:05:39.572 START TEST spdkcli_tcp 00:05:39.572 ************************************ 00:05:39.572 21:48:58 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:39.572 * Looking for test storage... 
00:05:39.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:39.572 21:48:58 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:39.572 21:48:58 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:39.572 21:48:58 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:39.572 21:48:58 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:39.572 21:48:58 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:39.572 21:48:58 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:39.572 21:48:58 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:39.572 21:48:58 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:39.572 21:48:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:39.572 21:48:58 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3937860 00:05:39.572 21:48:58 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:39.572 21:48:58 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3937860 00:05:39.572 21:48:58 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 3937860 ']' 00:05:39.572 21:48:58 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.572 21:48:58 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.572 21:48:58 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.572 21:48:58 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.572 21:48:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:39.832 [2024-07-13 21:48:59.000679] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:39.832 [2024-07-13 21:48:59.000828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3937860 ] 00:05:39.832 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.832 [2024-07-13 21:48:59.129704] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.092 [2024-07-13 21:48:59.384352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.092 [2024-07-13 21:48:59.384359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.026 21:49:00 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.026 21:49:00 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:41.026 21:49:00 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3937998 00:05:41.026 21:49:00 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:41.026 21:49:00 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:41.286 [ 00:05:41.286 "bdev_malloc_delete", 00:05:41.286 "bdev_malloc_create", 00:05:41.286 "bdev_null_resize", 00:05:41.286 "bdev_null_delete", 00:05:41.286 "bdev_null_create", 00:05:41.286 "bdev_nvme_cuse_unregister", 00:05:41.286 "bdev_nvme_cuse_register", 00:05:41.286 "bdev_opal_new_user", 00:05:41.286 "bdev_opal_set_lock_state", 00:05:41.286 "bdev_opal_delete", 00:05:41.286 "bdev_opal_get_info", 00:05:41.286 "bdev_opal_create", 00:05:41.286 "bdev_nvme_opal_revert", 00:05:41.286 "bdev_nvme_opal_init", 00:05:41.286 "bdev_nvme_send_cmd", 00:05:41.286 "bdev_nvme_get_path_iostat", 00:05:41.286 "bdev_nvme_get_mdns_discovery_info", 00:05:41.286 "bdev_nvme_stop_mdns_discovery", 00:05:41.286 "bdev_nvme_start_mdns_discovery", 00:05:41.286 "bdev_nvme_set_multipath_policy", 00:05:41.286 "bdev_nvme_set_preferred_path", 00:05:41.286 "bdev_nvme_get_io_paths", 00:05:41.286 "bdev_nvme_remove_error_injection", 00:05:41.286 "bdev_nvme_add_error_injection", 00:05:41.286 "bdev_nvme_get_discovery_info", 00:05:41.286 "bdev_nvme_stop_discovery", 00:05:41.286 "bdev_nvme_start_discovery", 00:05:41.286 "bdev_nvme_get_controller_health_info", 00:05:41.286 "bdev_nvme_disable_controller", 00:05:41.286 "bdev_nvme_enable_controller", 00:05:41.286 "bdev_nvme_reset_controller", 00:05:41.286 "bdev_nvme_get_transport_statistics", 00:05:41.286 "bdev_nvme_apply_firmware", 00:05:41.286 "bdev_nvme_detach_controller", 00:05:41.286 "bdev_nvme_get_controllers", 00:05:41.286 "bdev_nvme_attach_controller", 00:05:41.286 "bdev_nvme_set_hotplug", 00:05:41.286 "bdev_nvme_set_options", 00:05:41.286 "bdev_passthru_delete", 00:05:41.286 "bdev_passthru_create", 00:05:41.286 "bdev_lvol_set_parent_bdev", 00:05:41.286 "bdev_lvol_set_parent", 00:05:41.286 "bdev_lvol_check_shallow_copy", 00:05:41.286 "bdev_lvol_start_shallow_copy", 00:05:41.286 "bdev_lvol_grow_lvstore", 00:05:41.286 "bdev_lvol_get_lvols", 00:05:41.286 "bdev_lvol_get_lvstores", 00:05:41.286 "bdev_lvol_delete", 00:05:41.286 "bdev_lvol_set_read_only", 00:05:41.286 "bdev_lvol_resize", 00:05:41.286 "bdev_lvol_decouple_parent", 00:05:41.286 "bdev_lvol_inflate", 00:05:41.286 "bdev_lvol_rename", 00:05:41.286 "bdev_lvol_clone_bdev", 00:05:41.286 "bdev_lvol_clone", 00:05:41.286 "bdev_lvol_snapshot", 00:05:41.286 "bdev_lvol_create", 00:05:41.286 "bdev_lvol_delete_lvstore", 00:05:41.286 
"bdev_lvol_rename_lvstore", 00:05:41.286 "bdev_lvol_create_lvstore", 00:05:41.286 "bdev_raid_set_options", 00:05:41.286 "bdev_raid_remove_base_bdev", 00:05:41.286 "bdev_raid_add_base_bdev", 00:05:41.286 "bdev_raid_delete", 00:05:41.286 "bdev_raid_create", 00:05:41.286 "bdev_raid_get_bdevs", 00:05:41.286 "bdev_error_inject_error", 00:05:41.286 "bdev_error_delete", 00:05:41.286 "bdev_error_create", 00:05:41.286 "bdev_split_delete", 00:05:41.286 "bdev_split_create", 00:05:41.286 "bdev_delay_delete", 00:05:41.286 "bdev_delay_create", 00:05:41.286 "bdev_delay_update_latency", 00:05:41.286 "bdev_zone_block_delete", 00:05:41.286 "bdev_zone_block_create", 00:05:41.286 "blobfs_create", 00:05:41.286 "blobfs_detect", 00:05:41.286 "blobfs_set_cache_size", 00:05:41.286 "bdev_aio_delete", 00:05:41.286 "bdev_aio_rescan", 00:05:41.286 "bdev_aio_create", 00:05:41.286 "bdev_ftl_set_property", 00:05:41.286 "bdev_ftl_get_properties", 00:05:41.286 "bdev_ftl_get_stats", 00:05:41.286 "bdev_ftl_unmap", 00:05:41.286 "bdev_ftl_unload", 00:05:41.286 "bdev_ftl_delete", 00:05:41.286 "bdev_ftl_load", 00:05:41.286 "bdev_ftl_create", 00:05:41.286 "bdev_virtio_attach_controller", 00:05:41.286 "bdev_virtio_scsi_get_devices", 00:05:41.286 "bdev_virtio_detach_controller", 00:05:41.286 "bdev_virtio_blk_set_hotplug", 00:05:41.286 "bdev_iscsi_delete", 00:05:41.286 "bdev_iscsi_create", 00:05:41.286 "bdev_iscsi_set_options", 00:05:41.286 "accel_error_inject_error", 00:05:41.286 "ioat_scan_accel_module", 00:05:41.286 "dsa_scan_accel_module", 00:05:41.286 "iaa_scan_accel_module", 00:05:41.286 "keyring_file_remove_key", 00:05:41.286 "keyring_file_add_key", 00:05:41.286 "keyring_linux_set_options", 00:05:41.286 "iscsi_get_histogram", 00:05:41.286 "iscsi_enable_histogram", 00:05:41.286 "iscsi_set_options", 00:05:41.286 "iscsi_get_auth_groups", 00:05:41.286 "iscsi_auth_group_remove_secret", 00:05:41.286 "iscsi_auth_group_add_secret", 00:05:41.286 "iscsi_delete_auth_group", 00:05:41.286 "iscsi_create_auth_group", 00:05:41.286 "iscsi_set_discovery_auth", 00:05:41.286 "iscsi_get_options", 00:05:41.286 "iscsi_target_node_request_logout", 00:05:41.286 "iscsi_target_node_set_redirect", 00:05:41.286 "iscsi_target_node_set_auth", 00:05:41.286 "iscsi_target_node_add_lun", 00:05:41.286 "iscsi_get_stats", 00:05:41.286 "iscsi_get_connections", 00:05:41.286 "iscsi_portal_group_set_auth", 00:05:41.286 "iscsi_start_portal_group", 00:05:41.286 "iscsi_delete_portal_group", 00:05:41.286 "iscsi_create_portal_group", 00:05:41.286 "iscsi_get_portal_groups", 00:05:41.286 "iscsi_delete_target_node", 00:05:41.286 "iscsi_target_node_remove_pg_ig_maps", 00:05:41.286 "iscsi_target_node_add_pg_ig_maps", 00:05:41.286 "iscsi_create_target_node", 00:05:41.286 "iscsi_get_target_nodes", 00:05:41.286 "iscsi_delete_initiator_group", 00:05:41.286 "iscsi_initiator_group_remove_initiators", 00:05:41.286 "iscsi_initiator_group_add_initiators", 00:05:41.286 "iscsi_create_initiator_group", 00:05:41.286 "iscsi_get_initiator_groups", 00:05:41.286 "nvmf_set_crdt", 00:05:41.286 "nvmf_set_config", 00:05:41.286 "nvmf_set_max_subsystems", 00:05:41.286 "nvmf_stop_mdns_prr", 00:05:41.286 "nvmf_publish_mdns_prr", 00:05:41.286 "nvmf_subsystem_get_listeners", 00:05:41.286 "nvmf_subsystem_get_qpairs", 00:05:41.286 "nvmf_subsystem_get_controllers", 00:05:41.286 "nvmf_get_stats", 00:05:41.286 "nvmf_get_transports", 00:05:41.286 "nvmf_create_transport", 00:05:41.286 "nvmf_get_targets", 00:05:41.286 "nvmf_delete_target", 00:05:41.286 "nvmf_create_target", 00:05:41.286 
"nvmf_subsystem_allow_any_host", 00:05:41.286 "nvmf_subsystem_remove_host", 00:05:41.286 "nvmf_subsystem_add_host", 00:05:41.286 "nvmf_ns_remove_host", 00:05:41.286 "nvmf_ns_add_host", 00:05:41.286 "nvmf_subsystem_remove_ns", 00:05:41.286 "nvmf_subsystem_add_ns", 00:05:41.286 "nvmf_subsystem_listener_set_ana_state", 00:05:41.286 "nvmf_discovery_get_referrals", 00:05:41.286 "nvmf_discovery_remove_referral", 00:05:41.286 "nvmf_discovery_add_referral", 00:05:41.286 "nvmf_subsystem_remove_listener", 00:05:41.286 "nvmf_subsystem_add_listener", 00:05:41.286 "nvmf_delete_subsystem", 00:05:41.286 "nvmf_create_subsystem", 00:05:41.286 "nvmf_get_subsystems", 00:05:41.286 "env_dpdk_get_mem_stats", 00:05:41.286 "nbd_get_disks", 00:05:41.286 "nbd_stop_disk", 00:05:41.286 "nbd_start_disk", 00:05:41.286 "ublk_recover_disk", 00:05:41.286 "ublk_get_disks", 00:05:41.286 "ublk_stop_disk", 00:05:41.286 "ublk_start_disk", 00:05:41.286 "ublk_destroy_target", 00:05:41.286 "ublk_create_target", 00:05:41.286 "virtio_blk_create_transport", 00:05:41.286 "virtio_blk_get_transports", 00:05:41.286 "vhost_controller_set_coalescing", 00:05:41.286 "vhost_get_controllers", 00:05:41.286 "vhost_delete_controller", 00:05:41.286 "vhost_create_blk_controller", 00:05:41.286 "vhost_scsi_controller_remove_target", 00:05:41.286 "vhost_scsi_controller_add_target", 00:05:41.286 "vhost_start_scsi_controller", 00:05:41.286 "vhost_create_scsi_controller", 00:05:41.286 "thread_set_cpumask", 00:05:41.286 "framework_get_governor", 00:05:41.286 "framework_get_scheduler", 00:05:41.286 "framework_set_scheduler", 00:05:41.286 "framework_get_reactors", 00:05:41.286 "thread_get_io_channels", 00:05:41.286 "thread_get_pollers", 00:05:41.286 "thread_get_stats", 00:05:41.287 "framework_monitor_context_switch", 00:05:41.287 "spdk_kill_instance", 00:05:41.287 "log_enable_timestamps", 00:05:41.287 "log_get_flags", 00:05:41.287 "log_clear_flag", 00:05:41.287 "log_set_flag", 00:05:41.287 "log_get_level", 00:05:41.287 "log_set_level", 00:05:41.287 "log_get_print_level", 00:05:41.287 "log_set_print_level", 00:05:41.287 "framework_enable_cpumask_locks", 00:05:41.287 "framework_disable_cpumask_locks", 00:05:41.287 "framework_wait_init", 00:05:41.287 "framework_start_init", 00:05:41.287 "scsi_get_devices", 00:05:41.287 "bdev_get_histogram", 00:05:41.287 "bdev_enable_histogram", 00:05:41.287 "bdev_set_qos_limit", 00:05:41.287 "bdev_set_qd_sampling_period", 00:05:41.287 "bdev_get_bdevs", 00:05:41.287 "bdev_reset_iostat", 00:05:41.287 "bdev_get_iostat", 00:05:41.287 "bdev_examine", 00:05:41.287 "bdev_wait_for_examine", 00:05:41.287 "bdev_set_options", 00:05:41.287 "notify_get_notifications", 00:05:41.287 "notify_get_types", 00:05:41.287 "accel_get_stats", 00:05:41.287 "accel_set_options", 00:05:41.287 "accel_set_driver", 00:05:41.287 "accel_crypto_key_destroy", 00:05:41.287 "accel_crypto_keys_get", 00:05:41.287 "accel_crypto_key_create", 00:05:41.287 "accel_assign_opc", 00:05:41.287 "accel_get_module_info", 00:05:41.287 "accel_get_opc_assignments", 00:05:41.287 "vmd_rescan", 00:05:41.287 "vmd_remove_device", 00:05:41.287 "vmd_enable", 00:05:41.287 "sock_get_default_impl", 00:05:41.287 "sock_set_default_impl", 00:05:41.287 "sock_impl_set_options", 00:05:41.287 "sock_impl_get_options", 00:05:41.287 "iobuf_get_stats", 00:05:41.287 "iobuf_set_options", 00:05:41.287 "framework_get_pci_devices", 00:05:41.287 "framework_get_config", 00:05:41.287 "framework_get_subsystems", 00:05:41.287 "trace_get_info", 00:05:41.287 "trace_get_tpoint_group_mask", 00:05:41.287 
"trace_disable_tpoint_group", 00:05:41.287 "trace_enable_tpoint_group", 00:05:41.287 "trace_clear_tpoint_mask", 00:05:41.287 "trace_set_tpoint_mask", 00:05:41.287 "keyring_get_keys", 00:05:41.287 "spdk_get_version", 00:05:41.287 "rpc_get_methods" 00:05:41.287 ] 00:05:41.287 21:49:00 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:41.287 21:49:00 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:41.287 21:49:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.287 21:49:00 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:41.287 21:49:00 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3937860 00:05:41.287 21:49:00 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 3937860 ']' 00:05:41.287 21:49:00 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 3937860 00:05:41.287 21:49:00 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:41.287 21:49:00 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:41.287 21:49:00 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3937860 00:05:41.287 21:49:00 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:41.287 21:49:00 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:41.287 21:49:00 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3937860' 00:05:41.287 killing process with pid 3937860 00:05:41.287 21:49:00 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 3937860 00:05:41.287 21:49:00 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 3937860 00:05:43.819 00:05:43.819 real 0m4.179s 00:05:43.819 user 0m7.440s 00:05:43.819 sys 0m0.645s 00:05:43.819 21:49:03 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.819 21:49:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.819 ************************************ 00:05:43.819 END TEST spdkcli_tcp 00:05:43.819 ************************************ 00:05:43.819 21:49:03 -- common/autotest_common.sh@1142 -- # return 0 00:05:43.819 21:49:03 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:43.819 21:49:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.819 21:49:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.819 21:49:03 -- common/autotest_common.sh@10 -- # set +x 00:05:43.819 ************************************ 00:05:43.819 START TEST dpdk_mem_utility 00:05:43.819 ************************************ 00:05:43.819 21:49:03 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:43.819 * Looking for test storage... 
00:05:43.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:43.819 21:49:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:43.819 21:49:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3938361 00:05:43.819 21:49:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.819 21:49:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3938361 00:05:43.819 21:49:03 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 3938361 ']' 00:05:43.819 21:49:03 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.819 21:49:03 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.819 21:49:03 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.819 21:49:03 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.819 21:49:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:44.076 [2024-07-13 21:49:03.232939] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:44.076 [2024-07-13 21:49:03.233104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3938361 ] 00:05:44.076 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.076 [2024-07-13 21:49:03.357429] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.335 [2024-07-13 21:49:03.609330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.272 21:49:04 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.272 21:49:04 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:45.272 21:49:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:45.272 21:49:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:45.272 21:49:04 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.272 21:49:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:45.272 { 00:05:45.272 "filename": "/tmp/spdk_mem_dump.txt" 00:05:45.272 } 00:05:45.272 21:49:04 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.272 21:49:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:45.272 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:45.272 1 heaps totaling size 820.000000 MiB 00:05:45.272 size: 820.000000 MiB heap id: 0 00:05:45.272 end heaps---------- 00:05:45.272 8 mempools totaling size 598.116089 MiB 00:05:45.272 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:45.272 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:45.272 size: 84.521057 MiB name: bdev_io_3938361 00:05:45.272 size: 51.011292 MiB name: evtpool_3938361 00:05:45.272 
size: 50.003479 MiB name: msgpool_3938361 00:05:45.272 size: 21.763794 MiB name: PDU_Pool 00:05:45.272 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:45.272 size: 0.026123 MiB name: Session_Pool 00:05:45.272 end mempools------- 00:05:45.272 6 memzones totaling size 4.142822 MiB 00:05:45.272 size: 1.000366 MiB name: RG_ring_0_3938361 00:05:45.272 size: 1.000366 MiB name: RG_ring_1_3938361 00:05:45.272 size: 1.000366 MiB name: RG_ring_4_3938361 00:05:45.272 size: 1.000366 MiB name: RG_ring_5_3938361 00:05:45.272 size: 0.125366 MiB name: RG_ring_2_3938361 00:05:45.272 size: 0.015991 MiB name: RG_ring_3_3938361 00:05:45.272 end memzones------- 00:05:45.272 21:49:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:45.272 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:05:45.272 list of free elements. size: 18.514832 MiB 00:05:45.272 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:45.272 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:45.272 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:45.272 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:45.272 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:45.272 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:45.272 element at address: 0x200019600000 with size: 0.999329 MiB 00:05:45.272 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:45.272 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:45.272 element at address: 0x200018e00000 with size: 0.959900 MiB 00:05:45.272 element at address: 0x200019900040 with size: 0.937256 MiB 00:05:45.272 element at address: 0x200000200000 with size: 0.840942 MiB 00:05:45.272 element at address: 0x20001b000000 with size: 0.583191 MiB 00:05:45.272 element at address: 0x200019200000 with size: 0.491150 MiB 00:05:45.272 element at address: 0x200019a00000 with size: 0.485657 MiB 00:05:45.272 element at address: 0x200013800000 with size: 0.470581 MiB 00:05:45.272 element at address: 0x200028400000 with size: 0.411072 MiB 00:05:45.272 element at address: 0x200003a00000 with size: 0.356140 MiB 00:05:45.272 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:05:45.272 list of standard malloc elements. 
size: 199.220764 MiB 00:05:45.272 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:45.272 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:45.272 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:45.272 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:45.272 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:45.272 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:45.272 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:45.272 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:45.272 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:05:45.272 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:05:45.272 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:05:45.272 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:05:45.272 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:05:45.272 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:45.272 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:45.272 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:45.272 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:45.272 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:45.272 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:45.272 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:45.272 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:05:45.272 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:05:45.272 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:05:45.272 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:05:45.272 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:05:45.272 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:05:45.272 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:45.272 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:45.272 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:45.272 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:45.272 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:05:45.272 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:05:45.272 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:05:45.273 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:05:45.273 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:05:45.273 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:05:45.273 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:05:45.273 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:05:45.273 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:45.273 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:45.273 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:45.273 list of memzone associated elements. 
size: 602.264404 MiB 00:05:45.273 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:45.273 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:45.273 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:45.273 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:45.273 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:45.273 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3938361_0 00:05:45.273 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:45.273 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3938361_0 00:05:45.273 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:45.273 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3938361_0 00:05:45.273 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:45.273 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:45.273 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:45.273 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:45.273 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:45.273 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3938361 00:05:45.273 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:45.273 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3938361 00:05:45.273 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:45.273 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3938361 00:05:45.273 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:45.273 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:45.273 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:45.273 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:45.273 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:45.273 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:45.273 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:45.273 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:45.273 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:45.273 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3938361 00:05:45.273 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:45.273 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3938361 00:05:45.273 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:45.273 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3938361 00:05:45.273 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:45.273 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3938361 00:05:45.273 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:45.273 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3938361 00:05:45.273 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:05:45.273 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:45.273 element at address: 0x200013878780 with size: 0.500549 MiB 00:05:45.273 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:45.273 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:05:45.273 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:45.273 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:45.273 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3938361 00:05:45.273 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:05:45.273 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:45.273 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:05:45.273 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:45.273 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:45.273 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3938361 00:05:45.273 element at address: 0x20002846f540 with size: 0.002502 MiB 00:05:45.273 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:45.273 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:05:45.273 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3938361 00:05:45.273 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:45.273 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3938361 00:05:45.273 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:05:45.273 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:45.273 21:49:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:45.273 21:49:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3938361 00:05:45.273 21:49:04 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 3938361 ']' 00:05:45.273 21:49:04 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 3938361 00:05:45.273 21:49:04 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:45.273 21:49:04 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:45.273 21:49:04 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3938361 00:05:45.531 21:49:04 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:45.531 21:49:04 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:45.531 21:49:04 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3938361' 00:05:45.531 killing process with pid 3938361 00:05:45.531 21:49:04 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 3938361 00:05:45.531 21:49:04 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 3938361 00:05:48.092 00:05:48.092 real 0m4.119s 00:05:48.092 user 0m4.125s 00:05:48.092 sys 0m0.653s 00:05:48.092 21:49:07 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.092 21:49:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:48.092 ************************************ 00:05:48.092 END TEST dpdk_mem_utility 00:05:48.092 ************************************ 00:05:48.092 21:49:07 -- common/autotest_common.sh@1142 -- # return 0 00:05:48.092 21:49:07 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:48.092 21:49:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.092 21:49:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.092 21:49:07 -- common/autotest_common.sh@10 -- # set +x 00:05:48.092 ************************************ 00:05:48.092 START TEST event 00:05:48.092 ************************************ 00:05:48.092 21:49:07 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:48.092 * Looking for test storage... 
00:05:48.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:48.092 21:49:07 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:48.092 21:49:07 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:48.092 21:49:07 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:48.092 21:49:07 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:48.092 21:49:07 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.092 21:49:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.092 ************************************ 00:05:48.092 START TEST event_perf 00:05:48.092 ************************************ 00:05:48.092 21:49:07 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:48.092 Running I/O for 1 seconds...[2024-07-13 21:49:07.352522] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:48.092 [2024-07-13 21:49:07.352673] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3938926 ] 00:05:48.092 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.092 [2024-07-13 21:49:07.481827] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:48.659 [2024-07-13 21:49:07.744993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.659 [2024-07-13 21:49:07.745054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.659 [2024-07-13 21:49:07.745112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.659 [2024-07-13 21:49:07.745121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.107 Running I/O for 1 seconds... 00:05:50.107 lcore 0: 192094 00:05:50.107 lcore 1: 192095 00:05:50.107 lcore 2: 192094 00:05:50.107 lcore 3: 192094 00:05:50.107 done. 00:05:50.107 00:05:50.107 real 0m1.890s 00:05:50.107 user 0m4.712s 00:05:50.107 sys 0m0.158s 00:05:50.107 21:49:09 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.107 21:49:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.107 ************************************ 00:05:50.107 END TEST event_perf 00:05:50.107 ************************************ 00:05:50.107 21:49:09 event -- common/autotest_common.sh@1142 -- # return 0 00:05:50.107 21:49:09 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:50.107 21:49:09 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:50.107 21:49:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.107 21:49:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.107 ************************************ 00:05:50.107 START TEST event_reactor 00:05:50.107 ************************************ 00:05:50.107 21:49:09 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:50.107 [2024-07-13 21:49:09.295380] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:50.107 [2024-07-13 21:49:09.295505] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3939211 ] 00:05:50.107 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.107 [2024-07-13 21:49:09.425924] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.368 [2024-07-13 21:49:09.687269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.746 test_start 00:05:51.746 oneshot 00:05:51.746 tick 100 00:05:51.746 tick 100 00:05:51.746 tick 250 00:05:51.746 tick 100 00:05:51.746 tick 100 00:05:51.746 tick 100 00:05:51.746 tick 250 00:05:51.746 tick 500 00:05:51.746 tick 100 00:05:51.746 tick 100 00:05:51.746 tick 250 00:05:51.746 tick 100 00:05:51.746 tick 100 00:05:51.746 test_end 00:05:52.004 00:05:52.004 real 0m1.884s 00:05:52.004 user 0m1.723s 00:05:52.004 sys 0m0.151s 00:05:52.004 21:49:11 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.004 21:49:11 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:52.004 ************************************ 00:05:52.004 END TEST event_reactor 00:05:52.004 ************************************ 00:05:52.005 21:49:11 event -- common/autotest_common.sh@1142 -- # return 0 00:05:52.005 21:49:11 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.005 21:49:11 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:52.005 21:49:11 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.005 21:49:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.005 ************************************ 00:05:52.005 START TEST event_reactor_perf 00:05:52.005 ************************************ 00:05:52.005 21:49:11 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:52.005 [2024-07-13 21:49:11.227945] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:52.005 [2024-07-13 21:49:11.228081] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3939492 ] 00:05:52.005 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.005 [2024-07-13 21:49:11.373442] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.275 [2024-07-13 21:49:11.640630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.176 test_start 00:05:54.176 test_end 00:05:54.176 Performance: 270460 events per second 00:05:54.176 00:05:54.176 real 0m1.898s 00:05:54.176 user 0m1.724s 00:05:54.176 sys 0m0.164s 00:05:54.176 21:49:13 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.176 21:49:13 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:54.176 ************************************ 00:05:54.176 END TEST event_reactor_perf 00:05:54.176 ************************************ 00:05:54.176 21:49:13 event -- common/autotest_common.sh@1142 -- # return 0 00:05:54.176 21:49:13 event -- event/event.sh@49 -- # uname -s 00:05:54.176 21:49:13 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:54.176 21:49:13 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:54.176 21:49:13 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.176 21:49:13 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.176 21:49:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.176 ************************************ 00:05:54.176 START TEST event_scheduler 00:05:54.176 ************************************ 00:05:54.176 21:49:13 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:54.176 * Looking for test storage... 00:05:54.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:54.176 21:49:13 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:54.176 21:49:13 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3939689 00:05:54.176 21:49:13 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:54.176 21:49:13 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:54.176 21:49:13 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3939689 00:05:54.176 21:49:13 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 3939689 ']' 00:05:54.176 21:49:13 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.176 21:49:13 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.176 21:49:13 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:54.176 21:49:13 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.176 21:49:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.176 [2024-07-13 21:49:13.266341] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:54.176 [2024-07-13 21:49:13.266485] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3939689 ] 00:05:54.176 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.176 [2024-07-13 21:49:13.400409] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:54.436 [2024-07-13 21:49:13.668679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.436 [2024-07-13 21:49:13.668733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.436 [2024-07-13 21:49:13.668787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.436 [2024-07-13 21:49:13.668791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.002 21:49:14 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.002 21:49:14 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:55.002 21:49:14 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:55.002 21:49:14 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.002 21:49:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.002 [2024-07-13 21:49:14.191440] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:55.002 [2024-07-13 21:49:14.191508] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:55.002 [2024-07-13 21:49:14.191544] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:55.002 [2024-07-13 21:49:14.191567] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:55.002 [2024-07-13 21:49:14.191584] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:55.002 21:49:14 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.002 21:49:14 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:55.002 21:49:14 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.002 21:49:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.260 [2024-07-13 21:49:14.490601] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
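The stage traced above can be replayed by hand with the same binaries and RPCs that appear in the log. A minimal sketch, assuming the workspace layout of this job — the SPDK_DIR variable, the PYTHONPATH export, and the bare sleep standing in for waitforlisten are illustrative assumptions; the binary path, RPC names, and flags are taken verbatim from the trace:

#!/usr/bin/env bash
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed checkout path, as in this job
RPC="$SPDK_DIR/scripts/rpc.py"
export PYTHONPATH="$SPDK_DIR/test/event/scheduler"           # so rpc.py can load scheduler_plugin (assumption)

# Start the scheduler test app on 4 cores with main lcore 2, held before
# framework init (--wait-for-rpc) and kept in the foreground (-f).
"$SPDK_DIR/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
scheduler_pid=$!
sleep 1   # the harness waits on /var/tmp/spdk.sock via waitforlisten instead

# Select the dynamic scheduler before starting the framework; the dpdk_governor
# error above is expected when the core mask covers only some SMT siblings.
"$RPC" framework_set_scheduler dynamic
"$RPC" framework_start_init

# Threads are then created through the test plugin, as in the trace that
# follows: -n names the thread, -m pins it to a cpumask, -a sets the
# active percentage it reports to the scheduler.
"$RPC" --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100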
00:05:55.260 21:49:14 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.260 21:49:14 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:55.260 21:49:14 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.260 21:49:14 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.260 21:49:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.260 ************************************ 00:05:55.260 START TEST scheduler_create_thread 00:05:55.260 ************************************ 00:05:55.260 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:55.260 21:49:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.261 2 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.261 3 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.261 4 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.261 5 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.261 6 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.261 7 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.261 8 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.261 9 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.261 10 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:55.261 00:05:55.261 real 0m0.107s 00:05:55.261 user 0m0.013s 00:05:55.261 sys 0m0.003s 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.261 21:49:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.261 ************************************ 00:05:55.261 END TEST scheduler_create_thread 00:05:55.261 ************************************ 00:05:55.261 21:49:14 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:55.261 21:49:14 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:55.261 21:49:14 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3939689 00:05:55.261 21:49:14 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 3939689 ']' 00:05:55.261 21:49:14 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 3939689 00:05:55.261 21:49:14 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:55.261 21:49:14 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.261 21:49:14 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3939689 00:05:55.519 21:49:14 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:55.519 21:49:14 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:55.519 21:49:14 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3939689' 00:05:55.519 killing process with pid 3939689 00:05:55.519 21:49:14 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 3939689 00:05:55.519 21:49:14 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 3939689 00:05:55.778 [2024-07-13 21:49:15.109605] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
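Every suite in this excerpt tears its target down the same way: send the kill, then poll with kill -0 until the process disappears. Distilled from the json_config/common.sh trace at the top of this excerpt into a standalone sketch — the function name wait_for_shutdown is illustrative, not from the harness; the loop bound, interval, and success message match the traced lines:

wait_for_shutdown() {
    local pid=$1 i
    for (( i = 0; i < 30; i++ )); do
        # kill -0 delivers no signal; it only tests whether $pid still exists.
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5   # same interval as common.sh@45 above
    done
    return 1   # target still alive after ~15 seconds
}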
00:05:57.157 00:05:57.157 real 0m3.089s 00:05:57.157 user 0m4.816s 00:05:57.157 sys 0m0.467s 00:05:57.157 21:49:16 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.157 21:49:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.157 ************************************ 00:05:57.157 END TEST event_scheduler 00:05:57.157 ************************************ 00:05:57.157 21:49:16 event -- common/autotest_common.sh@1142 -- # return 0 00:05:57.157 21:49:16 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:57.157 21:49:16 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:57.157 21:49:16 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.157 21:49:16 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.157 21:49:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.157 ************************************ 00:05:57.157 START TEST app_repeat 00:05:57.157 ************************************ 00:05:57.157 21:49:16 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:57.157 21:49:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.157 21:49:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.157 21:49:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:57.157 21:49:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.157 21:49:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:57.157 21:49:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:57.157 21:49:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:57.157 21:49:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3940134 00:05:57.157 21:49:16 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:57.157 21:49:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.157 21:49:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3940134' 00:05:57.157 Process app_repeat pid: 3940134 00:05:57.157 21:49:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:57.157 21:49:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:57.157 spdk_app_start Round 0 00:05:57.157 21:49:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3940134 /var/tmp/spdk-nbd.sock 00:05:57.157 21:49:16 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3940134 ']' 00:05:57.157 21:49:16 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.157 21:49:16 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.157 21:49:16 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:57.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:57.157 21:49:16 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.157 21:49:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.157 [2024-07-13 21:49:16.332924] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:05:57.157 [2024-07-13 21:49:16.333075] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3940134 ] 00:05:57.157 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.157 [2024-07-13 21:49:16.465687] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.416 [2024-07-13 21:49:16.729693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.416 [2024-07-13 21:49:16.729697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.980 21:49:17 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.980 21:49:17 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:57.980 21:49:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.238 Malloc0 00:05:58.238 21:49:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.805 Malloc1 00:05:58.805 21:49:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.805 21:49:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.805 21:49:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.805 21:49:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.805 21:49:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.805 21:49:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.805 21:49:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.805 21:49:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.805 21:49:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.805 21:49:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.806 21:49:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.806 21:49:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:58.806 21:49:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:58.806 21:49:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.806 21:49:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.806 21:49:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:58.806 /dev/nbd0 00:05:59.064 21:49:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:59.064 21:49:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:59.064 21:49:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:59.064 21:49:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:59.064 21:49:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:59.064 21:49:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:59.064 21:49:18 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:59.064 21:49:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:59.064 21:49:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:59.064 21:49:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:59.064 21:49:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.065 1+0 records in 00:05:59.065 1+0 records out 00:05:59.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213126 s, 19.2 MB/s 00:05:59.065 21:49:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.065 21:49:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:59.065 21:49:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.065 21:49:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:59.065 21:49:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:59.065 21:49:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.065 21:49:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.065 21:49:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:59.324 /dev/nbd1 00:05:59.324 21:49:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:59.324 21:49:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:59.324 21:49:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:59.324 21:49:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:59.324 21:49:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:59.324 21:49:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:59.324 21:49:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:59.324 21:49:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:59.324 21:49:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:59.324 21:49:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:59.324 21:49:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.324 1+0 records in 00:05:59.324 1+0 records out 00:05:59.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229352 s, 17.9 MB/s 00:05:59.324 21:49:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.324 21:49:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:59.324 21:49:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.324 21:49:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:59.324 21:49:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:59.324 21:49:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.324 21:49:18 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.324 21:49:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.324 21:49:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.324 21:49:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.582 21:49:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:59.582 { 00:05:59.582 "nbd_device": "/dev/nbd0", 00:05:59.582 "bdev_name": "Malloc0" 00:05:59.582 }, 00:05:59.582 { 00:05:59.582 "nbd_device": "/dev/nbd1", 00:05:59.582 "bdev_name": "Malloc1" 00:05:59.582 } 00:05:59.582 ]' 00:05:59.582 21:49:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:59.582 { 00:05:59.582 "nbd_device": "/dev/nbd0", 00:05:59.582 "bdev_name": "Malloc0" 00:05:59.582 }, 00:05:59.582 { 00:05:59.582 "nbd_device": "/dev/nbd1", 00:05:59.582 "bdev_name": "Malloc1" 00:05:59.582 } 00:05:59.582 ]' 00:05:59.582 21:49:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.582 21:49:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:59.582 /dev/nbd1' 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:59.583 /dev/nbd1' 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:59.583 256+0 records in 00:05:59.583 256+0 records out 00:05:59.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00410948 s, 255 MB/s 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:59.583 256+0 records in 00:05:59.583 256+0 records out 00:05:59.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248708 s, 42.2 MB/s 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:59.583 256+0 records in 00:05:59.583 256+0 records out 00:05:59.583 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0283633 s, 37.0 MB/s 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.583 21:49:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.841 21:49:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.841 21:49:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.841 21:49:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.841 21:49:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.841 21:49:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.841 21:49:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:59.841 21:49:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.841 21:49:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.841 21:49:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.841 21:49:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:00.100 21:49:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:00.100 21:49:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:00.100 21:49:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:00.100 21:49:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.100 21:49:19 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.100 21:49:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:00.100 21:49:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.100 21:49:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.100 21:49:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.100 21:49:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.100 21:49:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.358 21:49:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.358 21:49:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.358 21:49:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.358 21:49:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.358 21:49:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.358 21:49:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.358 21:49:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:00.358 21:49:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.358 21:49:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.358 21:49:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:00.358 21:49:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:00.358 21:49:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:00.358 21:49:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:00.928 21:49:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:02.304 [2024-07-13 21:49:21.598743] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.563 [2024-07-13 21:49:21.862207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.563 [2024-07-13 21:49:21.862209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.820 [2024-07-13 21:49:22.083829] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:02.820 [2024-07-13 21:49:22.083938] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.200 21:49:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:04.200 21:49:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:04.200 spdk_app_start Round 1 00:06:04.200 21:49:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3940134 /var/tmp/spdk-nbd.sock 00:06:04.200 21:49:23 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3940134 ']' 00:06:04.200 21:49:23 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.200 21:49:23 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.200 21:49:23 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
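Note: Round 0 above (and every later round) reduces its setup to four RPC calls over the app's Unix socket: create two 64 MB malloc bdevs with 4096-byte blocks, then export each one as an nbd device. A sketch using the rpc.py path and socket from this run; adjust both when reproducing elsewhere:

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096          # -> Malloc0 (64 MB, 4096-byte blocks)
    $rpc bdev_malloc_create 64 4096          # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0    # export each bdev over nbd
    $rpc nbd_start_disk Malloc1 /dev/nbd1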
00:06:04.200 21:49:23 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.200 21:49:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.200 21:49:23 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.200 21:49:23 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:04.200 21:49:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.477 Malloc0 00:06:04.477 21:49:23 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.737 Malloc1 00:06:04.737 21:49:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.737 21:49:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.737 21:49:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.737 21:49:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:04.737 21:49:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.737 21:49:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:04.737 21:49:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.737 21:49:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.737 21:49:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.737 21:49:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:04.737 21:49:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.737 21:49:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:04.737 21:49:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:04.737 21:49:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:04.737 21:49:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.737 21:49:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:04.993 /dev/nbd0 00:06:04.993 21:49:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:04.993 21:49:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:04.993 21:49:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:04.993 21:49:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:04.993 21:49:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:04.993 21:49:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:04.993 21:49:24 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:04.993 21:49:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:04.993 21:49:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:04.993 21:49:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:04.993 21:49:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:04.993 1+0 records in 00:06:04.993 1+0 records out 00:06:04.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230977 s, 17.7 MB/s 00:06:04.993 21:49:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.993 21:49:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:04.993 21:49:24 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.993 21:49:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:04.993 21:49:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:04.993 21:49:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.993 21:49:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.993 21:49:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.250 /dev/nbd1 00:06:05.250 21:49:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.250 21:49:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.250 21:49:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:05.250 21:49:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:05.250 21:49:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:05.250 21:49:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:05.250 21:49:24 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:05.250 21:49:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:05.250 21:49:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:05.250 21:49:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:05.250 21:49:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.250 1+0 records in 00:06:05.250 1+0 records out 00:06:05.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191257 s, 21.4 MB/s 00:06:05.250 21:49:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.250 21:49:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:05.250 21:49:24 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.250 21:49:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:05.250 21:49:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:05.250 21:49:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.250 21:49:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.250 21:49:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.250 21:49:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.250 21:49:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.508 21:49:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:05.508 { 00:06:05.508 "nbd_device": "/dev/nbd0", 00:06:05.508 "bdev_name": "Malloc0" 00:06:05.508 }, 00:06:05.508 { 00:06:05.508 "nbd_device": "/dev/nbd1", 00:06:05.508 "bdev_name": "Malloc1" 00:06:05.508 } 00:06:05.508 ]' 00:06:05.508 21:49:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.508 { 00:06:05.508 "nbd_device": "/dev/nbd0", 00:06:05.508 "bdev_name": "Malloc0" 00:06:05.508 }, 00:06:05.508 { 00:06:05.508 "nbd_device": "/dev/nbd1", 00:06:05.508 "bdev_name": "Malloc1" 00:06:05.508 } 00:06:05.508 ]' 00:06:05.508 21:49:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.508 21:49:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:05.508 /dev/nbd1' 00:06:05.508 21:49:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:05.508 /dev/nbd1' 00:06:05.508 21:49:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.508 21:49:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:05.508 21:49:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:05.508 21:49:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:05.508 21:49:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:05.508 21:49:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:05.508 21:49:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.508 21:49:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.508 21:49:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:05.508 21:49:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.508 21:49:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:05.508 21:49:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:05.766 256+0 records in 00:06:05.766 256+0 records out 00:06:05.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506731 s, 207 MB/s 00:06:05.766 21:49:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.766 21:49:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.766 256+0 records in 00:06:05.766 256+0 records out 00:06:05.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243936 s, 43.0 MB/s 00:06:05.766 21:49:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.766 21:49:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.766 256+0 records in 00:06:05.766 256+0 records out 00:06:05.766 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299084 s, 35.1 MB/s 00:06:05.766 21:49:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:05.766 21:49:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.766 21:49:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.766 21:49:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.766 21:49:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.766 21:49:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.766 21:49:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.766 21:49:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.766 21:49:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:05.766 21:49:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.766 21:49:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:05.766 21:49:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.766 21:49:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:05.766 21:49:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.766 21:49:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.766 21:49:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.766 21:49:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:05.766 21:49:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.766 21:49:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.023 21:49:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.023 21:49:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.023 21:49:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.023 21:49:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.023 21:49:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.023 21:49:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.023 21:49:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.023 21:49:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.024 21:49:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.024 21:49:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.281 21:49:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.281 21:49:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.281 21:49:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.281 21:49:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.281 21:49:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.281 21:49:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.281 21:49:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.281 21:49:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.281 21:49:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.281 21:49:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.281 21:49:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.539 21:49:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.539 21:49:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.539 21:49:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.539 21:49:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.539 21:49:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.539 21:49:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.539 21:49:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.539 21:49:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.539 21:49:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.539 21:49:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.539 21:49:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.539 21:49:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.539 21:49:25 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:07.108 21:49:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:08.488 [2024-07-13 21:49:27.663903] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.747 [2024-07-13 21:49:27.920612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.747 [2024-07-13 21:49:27.920615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.747 [2024-07-13 21:49:28.139792] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:08.747 [2024-07-13 21:49:28.139915] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.126 21:49:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:10.126 21:49:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:10.126 spdk_app_start Round 2 00:06:10.126 21:49:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3940134 /var/tmp/spdk-nbd.sock 00:06:10.126 21:49:29 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3940134 ']' 00:06:10.126 21:49:29 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.127 21:49:29 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.127 21:49:29 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
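Note: the data check that just completed (and repeats each round) is plain dd plus cmp: 1 MiB of random data is staged in a temp file, written through each nbd device with O_DIRECT, then compared back byte-for-byte. A sketch using the temp path and the 256 x 4096-byte record sizes from this trace:

    tmp=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # stage 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write phase
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"                              # verify phase; any diff fails
    done
    rm "$tmp"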
00:06:10.127 21:49:29 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.127 21:49:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.127 21:49:29 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.127 21:49:29 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:10.127 21:49:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.695 Malloc0 00:06:10.695 21:49:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.954 Malloc1 00:06:10.954 21:49:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.954 21:49:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.954 21:49:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.954 21:49:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.954 21:49:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.954 21:49:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.954 21:49:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.954 21:49:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.954 21:49:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.954 21:49:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.954 21:49:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.954 21:49:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.954 21:49:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:10.954 21:49:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.954 21:49:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.954 21:49:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:11.213 /dev/nbd0 00:06:11.213 21:49:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:11.213 21:49:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:11.213 21:49:30 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:11.213 21:49:30 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:11.213 21:49:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:11.213 21:49:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:11.213 21:49:30 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:11.213 21:49:30 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:11.213 21:49:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:11.213 21:49:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:11.213 21:49:30 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:11.213 1+0 records in 00:06:11.213 1+0 records out 00:06:11.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238882 s, 17.1 MB/s 00:06:11.213 21:49:30 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.213 21:49:30 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:11.213 21:49:30 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.213 21:49:30 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:11.213 21:49:30 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:11.213 21:49:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.213 21:49:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.213 21:49:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:11.471 /dev/nbd1 00:06:11.471 21:49:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:11.471 21:49:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:11.471 21:49:30 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:11.471 21:49:30 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:11.471 21:49:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:11.471 21:49:30 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:11.471 21:49:30 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:11.471 21:49:30 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:11.471 21:49:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:11.471 21:49:30 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:11.471 21:49:30 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.471 1+0 records in 00:06:11.471 1+0 records out 00:06:11.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198253 s, 20.7 MB/s 00:06:11.471 21:49:30 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.471 21:49:30 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:11.471 21:49:30 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.471 21:49:30 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:11.471 21:49:30 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:11.471 21:49:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.471 21:49:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.471 21:49:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.471 21:49:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.471 21:49:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.730 21:49:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:11.730 { 00:06:11.730 "nbd_device": "/dev/nbd0", 00:06:11.730 "bdev_name": "Malloc0" 00:06:11.730 }, 00:06:11.730 { 00:06:11.730 "nbd_device": "/dev/nbd1", 00:06:11.730 "bdev_name": "Malloc1" 00:06:11.730 } 00:06:11.730 ]' 00:06:11.730 21:49:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:11.730 { 00:06:11.730 "nbd_device": "/dev/nbd0", 00:06:11.730 "bdev_name": "Malloc0" 00:06:11.730 }, 00:06:11.730 { 00:06:11.730 "nbd_device": "/dev/nbd1", 00:06:11.730 "bdev_name": "Malloc1" 00:06:11.730 } 00:06:11.730 ]' 00:06:11.730 21:49:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.730 21:49:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:11.730 /dev/nbd1' 00:06:11.730 21:49:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:11.730 /dev/nbd1' 00:06:11.730 21:49:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.730 21:49:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:11.730 21:49:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:11.730 21:49:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:11.730 21:49:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:11.730 21:49:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:11.730 21:49:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.730 21:49:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.730 21:49:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:11.730 21:49:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.730 21:49:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:11.730 21:49:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.730 256+0 records in 00:06:11.730 256+0 records out 00:06:11.730 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00390842 s, 268 MB/s 00:06:11.730 21:49:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.730 21:49:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.730 256+0 records in 00:06:11.730 256+0 records out 00:06:11.730 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247124 s, 42.4 MB/s 00:06:11.730 21:49:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.730 21:49:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.730 256+0 records in 00:06:11.730 256+0 records out 00:06:11.730 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293419 s, 35.7 MB/s 00:06:11.730 21:49:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.730 21:49:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.730 21:49:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.730 21:49:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.730 21:49:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.730 21:49:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.730 21:49:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.730 21:49:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.730 21:49:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.730 21:49:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.730 21:49:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.730 21:49:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.730 21:49:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.730 21:49:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.730 21:49:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.730 21:49:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.730 21:49:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:11.730 21:49:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.730 21:49:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.990 21:49:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.990 21:49:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.990 21:49:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.990 21:49:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.990 21:49:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.990 21:49:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.990 21:49:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.990 21:49:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.990 21:49:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.990 21:49:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:12.248 21:49:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:12.248 21:49:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:12.248 21:49:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:12.248 21:49:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.248 21:49:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.248 21:49:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:12.248 21:49:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:12.248 21:49:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.248 21:49:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.248 21:49:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.248 21:49:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.506 21:49:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:12.506 21:49:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:12.506 21:49:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.506 21:49:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:12.506 21:49:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:12.506 21:49:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.506 21:49:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:12.506 21:49:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:12.506 21:49:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:12.506 21:49:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:12.506 21:49:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:12.506 21:49:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:12.506 21:49:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:13.075 21:49:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:14.468 [2024-07-13 21:49:33.662453] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.727 [2024-07-13 21:49:33.920395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.727 [2024-07-13 21:49:33.920398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.985 [2024-07-13 21:49:34.142112] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:14.985 [2024-07-13 21:49:34.142198] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:15.920 21:49:35 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3940134 /var/tmp/spdk-nbd.sock 00:06:15.920 21:49:35 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3940134 ']' 00:06:15.920 21:49:35 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.920 21:49:35 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.920 21:49:35 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
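Note: the count check that closes each round parses the nbd_get_disks JSON with jq; once both devices are stopped the RPC returns '[]', the name list is empty, and grep -c yields 0 (the trailing `true` in the trace keeps grep's non-zero status from tripping `set -e`). A sketch, again with this run's rpc.py path and socket:

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]   # all nbd devices stopped -> count must be 0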
00:06:15.920 21:49:35 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.920 21:49:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:16.178 21:49:35 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.178 21:49:35 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:16.178 21:49:35 event.app_repeat -- event/event.sh@39 -- # killprocess 3940134 00:06:16.178 21:49:35 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 3940134 ']' 00:06:16.178 21:49:35 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 3940134 00:06:16.178 21:49:35 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:16.178 21:49:35 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.178 21:49:35 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3940134 00:06:16.178 21:49:35 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:16.178 21:49:35 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:16.178 21:49:35 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3940134' 00:06:16.178 killing process with pid 3940134 00:06:16.178 21:49:35 event.app_repeat -- common/autotest_common.sh@967 -- # kill 3940134 00:06:16.178 21:49:35 event.app_repeat -- common/autotest_common.sh@972 -- # wait 3940134 00:06:17.571 spdk_app_start is called in Round 0. 00:06:17.571 Shutdown signal received, stop current app iteration 00:06:17.571 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:17.571 spdk_app_start is called in Round 1. 00:06:17.571 Shutdown signal received, stop current app iteration 00:06:17.571 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:17.571 spdk_app_start is called in Round 2. 00:06:17.571 Shutdown signal received, stop current app iteration 00:06:17.571 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:06:17.571 spdk_app_start is called in Round 3. 
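Note: the Round 0-3 replay printed around this point comes from the harness loop: three scripted rounds that each kill the app over RPC with SIGTERM, plus a final instance torn down by killprocess. A rough reconstruction from the event.sh line numbers in this trace (a sketch, not the verbatim script):

    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
        # ... create malloc bdevs, run the nbd write/verify pass ...
        rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3                                          # give the app time to restart
    done
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # Round 3: the final instance
    killprocess "$repeat_pid"
    trap - SIGINT SIGTERM EXIT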
00:06:17.571 Shutdown signal received, stop current app iteration 00:06:17.571 21:49:36 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:17.571 21:49:36 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:17.571 00:06:17.571 real 0m20.539s 00:06:17.571 user 0m42.031s 00:06:17.571 sys 0m3.349s 00:06:17.571 21:49:36 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.571 21:49:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.571 ************************************ 00:06:17.571 END TEST app_repeat 00:06:17.571 ************************************ 00:06:17.571 21:49:36 event -- common/autotest_common.sh@1142 -- # return 0 00:06:17.571 21:49:36 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:17.571 21:49:36 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:17.571 21:49:36 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.571 21:49:36 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.571 21:49:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.571 ************************************ 00:06:17.571 START TEST cpu_locks 00:06:17.571 ************************************ 00:06:17.571 21:49:36 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:17.571 * Looking for test storage... 00:06:17.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:17.571 21:49:36 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:17.571 21:49:36 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:17.571 21:49:36 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:17.571 21:49:36 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:17.571 21:49:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.571 21:49:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.571 21:49:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.571 ************************************ 00:06:17.571 START TEST default_locks 00:06:17.571 ************************************ 00:06:17.571 21:49:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:17.571 21:49:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3942774 00:06:17.571 21:49:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.571 21:49:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3942774 00:06:17.571 21:49:36 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3942774 ']' 00:06:17.571 21:49:36 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.571 21:49:36 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.571 21:49:36 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
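Note: the default_locks test that starts here asserts that a running spdk_tgt pinned to core 0 (-m 0x1) holds its CPU-core lock file: lslocks on the target pid must show an spdk_cpu_lock entry (the "lslocks: write error" further down is stderr noise, not a failure). A sketch of the probe, with the pid from this run standing in for any live target:

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock   # one lock file per claimed core
    }
    locks_exist 3942774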
00:06:17.571 21:49:36 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.571 21:49:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.830 [2024-07-13 21:49:37.038521] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:17.830 [2024-07-13 21:49:37.038666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3942774 ] 00:06:17.830 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.830 [2024-07-13 21:49:37.171611] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.090 [2024-07-13 21:49:37.432527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.051 21:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.051 21:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:19.051 21:49:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3942774 00:06:19.051 21:49:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3942774 00:06:19.051 21:49:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.616 lslocks: write error 00:06:19.616 21:49:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3942774 00:06:19.616 21:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 3942774 ']' 00:06:19.616 21:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 3942774 00:06:19.616 21:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:19.616 21:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:19.616 21:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3942774 00:06:19.616 21:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:19.616 21:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:19.616 21:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3942774' 00:06:19.616 killing process with pid 3942774 00:06:19.616 21:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 3942774 00:06:19.616 21:49:38 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 3942774 00:06:22.153 21:49:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3942774 00:06:22.153 21:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:22.153 21:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3942774 00:06:22.153 21:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:22.153 21:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.153 21:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:22.153 21:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.153 21:49:41 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 3942774 00:06:22.153 21:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3942774 ']' 00:06:22.153 21:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.153 21:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.153 21:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.153 21:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.153 21:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3942774) - No such process 00:06:22.153 ERROR: process (pid: 3942774) is no longer running 00:06:22.153 21:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.153 21:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:22.153 21:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:22.153 21:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.153 21:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:22.153 21:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.153 21:49:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:22.153 21:49:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:22.153 21:49:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:22.154 21:49:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:22.154 00:06:22.154 real 0m4.466s 00:06:22.154 user 0m4.420s 00:06:22.154 sys 0m0.769s 00:06:22.154 21:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.154 21:49:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.154 ************************************ 00:06:22.154 END TEST default_locks 00:06:22.154 ************************************ 00:06:22.154 21:49:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:22.154 21:49:41 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:22.154 21:49:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.154 21:49:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.154 21:49:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.154 ************************************ 00:06:22.154 START TEST default_locks_via_rpc 00:06:22.154 ************************************ 00:06:22.154 21:49:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:22.154 21:49:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3943332 00:06:22.154 21:49:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.154 21:49:41 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3943332 00:06:22.154 21:49:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3943332 ']' 00:06:22.154 21:49:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.154 21:49:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.154 21:49:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.154 21:49:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.154 21:49:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.413 [2024-07-13 21:49:41.573105] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:22.413 [2024-07-13 21:49:41.573289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3943332 ] 00:06:22.413 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.413 [2024-07-13 21:49:41.713429] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.674 [2024-07-13 21:49:41.971720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.610 21:49:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.610 21:49:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:23.610 21:49:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:23.610 21:49:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.610 21:49:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.610 21:49:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.610 21:49:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:23.610 21:49:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:23.610 21:49:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:23.610 21:49:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:23.610 21:49:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:23.610 21:49:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:23.610 21:49:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.610 21:49:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.610 21:49:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3943332 00:06:23.610 21:49:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3943332 00:06:23.610 21:49:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.870 
21:49:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3943332 00:06:23.870 21:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 3943332 ']' 00:06:23.870 21:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 3943332 00:06:23.870 21:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:23.870 21:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:23.870 21:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3943332 00:06:23.870 21:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:23.870 21:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:23.870 21:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3943332' 00:06:23.870 killing process with pid 3943332 00:06:23.870 21:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 3943332 00:06:23.870 21:49:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 3943332 00:06:26.407 00:06:26.407 real 0m4.279s 00:06:26.407 user 0m4.226s 00:06:26.407 sys 0m0.806s 00:06:26.407 21:49:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.407 21:49:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.407 ************************************ 00:06:26.407 END TEST default_locks_via_rpc 00:06:26.407 ************************************ 00:06:26.407 21:49:45 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:26.407 21:49:45 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:26.407 21:49:45 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.407 21:49:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.407 21:49:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.407 ************************************ 00:06:26.407 START TEST non_locking_app_on_locked_coremask 00:06:26.407 ************************************ 00:06:26.407 21:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:26.407 21:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3943876 00:06:26.407 21:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.407 21:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3943876 /var/tmp/spdk.sock 00:06:26.407 21:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3943876 ']' 00:06:26.407 21:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.407 21:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.407 21:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.407 21:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.407 21:49:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.665 [2024-07-13 21:49:45.891297] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:26.666 [2024-07-13 21:49:45.891477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3943876 ] 00:06:26.666 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.666 [2024-07-13 21:49:46.020307] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.925 [2024-07-13 21:49:46.272480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.861 21:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.861 21:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:27.861 21:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3944056 00:06:27.861 21:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3944056 /var/tmp/spdk2.sock 00:06:27.861 21:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:27.861 21:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3944056 ']' 00:06:27.861 21:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.861 21:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.861 21:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.861 21:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.861 21:49:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.119 [2024-07-13 21:49:47.266370] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:28.119 [2024-07-13 21:49:47.266510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3944056 ] 00:06:28.119 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.119 [2024-07-13 21:49:47.441398] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
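In essence this non_locking_app_on_locked_coremask case is the pair of launches traced above, sketched here with the long build paths shortened (pids and timestamps vary per run):

    # the first target claims core 0 by locking /var/tmp/spdk_cpu_lock_000
    spdk_tgt -m 0x1 &
    # the second target asks for the same core but passes --disable-cpumask-locks,
    # so it prints "CPU core locks deactivated." and starts without claiming it
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &

The locks_exist check that follows is therefore run against the first pid (3943876), the only one expected to hold a spdk_cpu_lock file.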
00:06:28.119 [2024-07-13 21:49:47.441462] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.686 [2024-07-13 21:49:47.971956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.587 21:49:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.587 21:49:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:30.587 21:49:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3943876 00:06:30.587 21:49:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.587 21:49:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3943876 00:06:31.154 lslocks: write error 00:06:31.154 21:49:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3943876 00:06:31.154 21:49:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3943876 ']' 00:06:31.154 21:49:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3943876 00:06:31.154 21:49:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:31.154 21:49:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:31.154 21:49:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3943876 00:06:31.154 21:49:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:31.154 21:49:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:31.154 21:49:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3943876' 00:06:31.154 killing process with pid 3943876 00:06:31.155 21:49:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3943876 00:06:31.155 21:49:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3943876 00:06:36.421 21:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3944056 00:06:36.421 21:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3944056 ']' 00:06:36.421 21:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3944056 00:06:36.421 21:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:36.421 21:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.421 21:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3944056 00:06:36.421 21:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:36.421 21:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.421 21:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3944056' 00:06:36.421 
killing process with pid 3944056 00:06:36.421 21:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3944056 00:06:36.421 21:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3944056 00:06:38.983 00:06:38.983 real 0m12.222s 00:06:38.983 user 0m12.528s 00:06:38.983 sys 0m1.462s 00:06:38.983 21:49:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.983 21:49:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.983 ************************************ 00:06:38.983 END TEST non_locking_app_on_locked_coremask 00:06:38.983 ************************************ 00:06:38.983 21:49:58 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:38.983 21:49:58 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:38.983 21:49:58 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.983 21:49:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.983 21:49:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.983 ************************************ 00:06:38.983 START TEST locking_app_on_unlocked_coremask 00:06:38.983 ************************************ 00:06:38.983 21:49:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:38.983 21:49:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3945377 00:06:38.983 21:49:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:38.984 21:49:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3945377 /var/tmp/spdk.sock 00:06:38.984 21:49:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3945377 ']' 00:06:38.984 21:49:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.984 21:49:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.984 21:49:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.984 21:49:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.984 21:49:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.984 [2024-07-13 21:49:58.172219] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:38.984 [2024-07-13 21:49:58.172399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3945377 ] 00:06:38.984 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.984 [2024-07-13 21:49:58.315709] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:38.984 [2024-07-13 21:49:58.315767] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.244 [2024-07-13 21:49:58.577455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.178 21:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.178 21:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:40.178 21:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3945522 00:06:40.178 21:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:40.178 21:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3945522 /var/tmp/spdk2.sock 00:06:40.178 21:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3945522 ']' 00:06:40.178 21:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.178 21:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.178 21:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.178 21:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.178 21:49:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.178 [2024-07-13 21:49:59.558136] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:40.178 [2024-07-13 21:49:59.558318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3945522 ] 00:06:40.435 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.435 [2024-07-13 21:49:59.759395] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.000 [2024-07-13 21:50:00.281259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.900 21:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.900 21:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:42.900 21:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3945522 00:06:42.900 21:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3945522 00:06:42.900 21:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.466 lslocks: write error 00:06:43.466 21:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3945377 00:06:43.466 21:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3945377 ']' 00:06:43.466 21:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3945377 00:06:43.466 21:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:43.466 21:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.466 21:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3945377 00:06:43.466 21:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.466 21:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.466 21:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3945377' 00:06:43.466 killing process with pid 3945377 00:06:43.466 21:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3945377 00:06:43.466 21:50:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3945377 00:06:48.740 21:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3945522 00:06:48.740 21:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3945522 ']' 00:06:48.740 21:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3945522 00:06:48.740 21:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:48.740 21:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.740 21:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3945522 00:06:48.740 21:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:48.740 21:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:48.740 21:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3945522' 00:06:48.740 killing process with pid 3945522 00:06:48.740 21:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3945522 00:06:48.740 21:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3945522 00:06:51.266 00:06:51.266 real 0m12.262s 00:06:51.266 user 0m12.631s 00:06:51.266 sys 0m1.475s 00:06:51.266 21:50:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.266 21:50:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.266 ************************************ 00:06:51.266 END TEST locking_app_on_unlocked_coremask 00:06:51.266 ************************************ 00:06:51.266 21:50:10 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:51.266 21:50:10 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:51.266 21:50:10 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.266 21:50:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.266 21:50:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.266 ************************************ 00:06:51.266 START TEST locking_app_on_locked_coremask 00:06:51.266 ************************************ 00:06:51.266 21:50:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:51.266 21:50:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3946878 00:06:51.266 21:50:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3946878 /var/tmp/spdk.sock 00:06:51.266 21:50:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:51.266 21:50:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3946878 ']' 00:06:51.266 21:50:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.266 21:50:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.266 21:50:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.266 21:50:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.266 21:50:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.266 [2024-07-13 21:50:10.480527] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:51.266 [2024-07-13 21:50:10.480701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3946878 ] 00:06:51.266 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.266 [2024-07-13 21:50:10.629084] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.523 [2024-07-13 21:50:10.895973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.455 21:50:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.455 21:50:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:52.455 21:50:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3947017 00:06:52.455 21:50:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:52.455 21:50:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3947017 /var/tmp/spdk2.sock 00:06:52.455 21:50:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:52.455 21:50:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3947017 /var/tmp/spdk2.sock 00:06:52.455 21:50:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:52.455 21:50:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.455 21:50:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:52.455 21:50:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.455 21:50:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3947017 /var/tmp/spdk2.sock 00:06:52.455 21:50:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3947017 ']' 00:06:52.455 21:50:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.455 21:50:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.455 21:50:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.455 21:50:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.455 21:50:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.712 [2024-07-13 21:50:11.892840] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:52.712 [2024-07-13 21:50:11.893033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3947017 ] 00:06:52.712 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.712 [2024-07-13 21:50:12.096682] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3946878 has claimed it. 00:06:52.712 [2024-07-13 21:50:12.096773] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:53.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3947017) - No such process 00:06:53.276 ERROR: process (pid: 3947017) is no longer running 00:06:53.276 21:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.276 21:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:53.276 21:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:53.276 21:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:53.276 21:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:53.276 21:50:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:53.276 21:50:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3946878 00:06:53.276 21:50:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3946878 00:06:53.276 21:50:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.841 lslocks: write error 00:06:53.841 21:50:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3946878 00:06:53.841 21:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3946878 ']' 00:06:53.841 21:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3946878 00:06:53.841 21:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:53.841 21:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:53.841 21:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3946878 00:06:53.841 21:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:53.841 21:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:53.841 21:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3946878' 00:06:53.841 killing process with pid 3946878 00:06:53.841 21:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3946878 00:06:53.841 21:50:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3946878 00:06:56.366 00:06:56.366 real 0m5.199s 00:06:56.366 user 0m5.469s 00:06:56.366 sys 0m1.007s 00:06:56.366 21:50:15 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.366 21:50:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.366 ************************************ 00:06:56.366 END TEST locking_app_on_locked_coremask 00:06:56.366 ************************************ 00:06:56.366 21:50:15 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:56.366 21:50:15 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:56.366 21:50:15 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:56.366 21:50:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.366 21:50:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.366 ************************************ 00:06:56.366 START TEST locking_overlapped_coremask 00:06:56.366 ************************************ 00:06:56.366 21:50:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:56.366 21:50:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3947570 00:06:56.366 21:50:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:56.366 21:50:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3947570 /var/tmp/spdk.sock 00:06:56.366 21:50:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3947570 ']' 00:06:56.366 21:50:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.366 21:50:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.366 21:50:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.366 21:50:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.366 21:50:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.366 [2024-07-13 21:50:15.717093] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:56.366 [2024-07-13 21:50:15.717253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3947570 ] 00:06:56.625 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.625 [2024-07-13 21:50:15.840139] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:56.883 [2024-07-13 21:50:16.094665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.883 [2024-07-13 21:50:16.094709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.883 [2024-07-13 21:50:16.094719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.855 21:50:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.855 21:50:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:57.855 21:50:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3947712 00:06:57.855 21:50:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3947712 /var/tmp/spdk2.sock 00:06:57.855 21:50:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:57.855 21:50:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:57.855 21:50:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3947712 /var/tmp/spdk2.sock 00:06:57.855 21:50:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:57.855 21:50:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.855 21:50:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:57.855 21:50:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:57.855 21:50:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3947712 /var/tmp/spdk2.sock 00:06:57.855 21:50:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3947712 ']' 00:06:57.855 21:50:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.855 21:50:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.855 21:50:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:57.855 21:50:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.855 21:50:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.855 [2024-07-13 21:50:17.086517] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:57.855 [2024-07-13 21:50:17.086667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3947712 ] 00:06:57.855 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.114 [2024-07-13 21:50:17.259462] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3947570 has claimed it. 00:06:58.114 [2024-07-13 21:50:17.259539] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:58.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3947712) - No such process 00:06:58.372 ERROR: process (pid: 3947712) is no longer running 00:06:58.372 21:50:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.372 21:50:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:58.372 21:50:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:58.372 21:50:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:58.372 21:50:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:58.372 21:50:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:58.372 21:50:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:58.372 21:50:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:58.372 21:50:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:58.372 21:50:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:58.372 21:50:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3947570 00:06:58.372 21:50:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 3947570 ']' 00:06:58.372 21:50:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 3947570 00:06:58.372 21:50:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:58.372 21:50:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:58.372 21:50:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3947570 00:06:58.372 21:50:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:58.372 21:50:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:58.372 21:50:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3947570' 00:06:58.372 killing process with pid 3947570 00:06:58.372 21:50:17 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 3947570 00:06:58.372 21:50:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 3947570 00:07:00.904 00:07:00.904 real 0m4.401s 00:07:00.904 user 0m11.424s 00:07:00.904 sys 0m0.758s 00:07:00.904 21:50:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.904 21:50:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.904 ************************************ 00:07:00.904 END TEST locking_overlapped_coremask 00:07:00.904 ************************************ 00:07:00.904 21:50:20 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:00.904 21:50:20 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:00.904 21:50:20 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:00.904 21:50:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.904 21:50:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.904 ************************************ 00:07:00.904 START TEST locking_overlapped_coremask_via_rpc 00:07:00.904 ************************************ 00:07:00.904 21:50:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:00.904 21:50:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3948044 00:07:00.904 21:50:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:00.904 21:50:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3948044 /var/tmp/spdk.sock 00:07:00.904 21:50:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3948044 ']' 00:07:00.904 21:50:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.904 21:50:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.904 21:50:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.904 21:50:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.904 21:50:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.904 [2024-07-13 21:50:20.176440] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:00.904 [2024-07-13 21:50:20.176590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3948044 ] 00:07:00.904 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.162 [2024-07-13 21:50:20.310377] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
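The check_remaining_locks assertion that closed the previous test, and closes this final one as well, expands to a plain glob comparison; reconstructed from the xtrace (event/cpu_locks.sh@36-38), assuming the three cores of cpumask 0x7:

    # the lock files left under /var/tmp must be exactly the three that the
    # -m 0x7 target claimed; any extra or missing file fails the [[ ]] test
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]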
00:07:01.162 [2024-07-13 21:50:20.310431] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.421 [2024-07-13 21:50:20.585232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.421 [2024-07-13 21:50:20.585282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.421 [2024-07-13 21:50:20.585288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.356 21:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.356 21:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:02.356 21:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3948281 00:07:02.356 21:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:02.356 21:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3948281 /var/tmp/spdk2.sock 00:07:02.356 21:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3948281 ']' 00:07:02.356 21:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:02.356 21:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.356 21:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:02.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:02.356 21:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.356 21:50:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.356 [2024-07-13 21:50:21.479748] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:02.356 [2024-07-13 21:50:21.479918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3948281 ] 00:07:02.356 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.356 [2024-07-13 21:50:21.653535] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:02.356 [2024-07-13 21:50:21.653592] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:02.923 [2024-07-13 21:50:22.118394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.923 [2024-07-13 21:50:22.121921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.923 [2024-07-13 21:50:22.121927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:04.854 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.854 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:04.854 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:04.854 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.854 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.854 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.854 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:04.854 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:04.854 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:04.854 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:04.854 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.854 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:04.854 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.854 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:04.854 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.854 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.854 [2024-07-13 21:50:24.179046] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3948044 has claimed it. 
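The JSON-RPC exchange that follows is the direct consequence of the claim error above: the second target (cpumask 0x1c, cores 2 through 4) tries to take its locks while the first (cpumask 0x7) still holds core 2. Issued by hand, the same failing call would look roughly like this (the rpc.py location is an assumption about the checkout layout; the socket matches the trace):

    # runtime claim of the target's cores; succeeds on the first instance,
    # fails here because core 2 is already locked by pid 3948044
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> error -32603 "Failed to claim CPU core: 2", the response captured
    #    below; the harness asserts the failure through its NOT wrapper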
00:07:04.854 request:
00:07:04.854 {
00:07:04.854 "method": "framework_enable_cpumask_locks",
00:07:04.854 "req_id": 1
00:07:04.854 }
00:07:04.854 Got JSON-RPC error response
00:07:04.854 response:
00:07:04.854 {
00:07:04.854 "code": -32603,
00:07:04.854 "message": "Failed to claim CPU core: 2"
00:07:04.854 }
00:07:04.854 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:07:04.854 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1
00:07:04.854 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:04.854 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:07:04.854 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:04.854 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3948044 /var/tmp/spdk.sock
00:07:04.854 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3948044 ']'
00:07:04.855 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:04.855 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:04.855 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:04.855 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:04.855 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:05.112 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:05.112 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:07:05.112 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3948281 /var/tmp/spdk2.sock
00:07:05.112 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3948281 ']'
00:07:05.112 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:05.112 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:05.112 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
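The -32603 response above is the intended outcome of the core-lock conflict: core 2 sits in both targets' masks, and the first target (pid 3948044) still holds /var/tmp/spdk_cpu_lock_002. A minimal sketch of reproducing the same conflict by hand, assuming a built SPDK tree and assuming the first target was started with -m 0x7 (which matches the three reactors on cores 0-2 logged earlier):

  # first target claims /var/tmp/spdk_cpu_lock_000..002 on startup
  ./build/bin/spdk_tgt -m 0x7 &
  # second target overlaps on core 2 but defers claiming its locks
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  # asking it to claim the locks now fails on the shared core with -32603
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks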
00:07:05.112 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:05.112 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:05.370 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:05.370 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:07:05.370 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:07:05.370 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:07:05.370 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:07:05.370 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:07:05.370
00:07:05.370 real 0m4.619s
00:07:05.370 user 0m1.524s
00:07:05.370 sys 0m0.245s
00:07:05.370 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:05.370 21:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:05.370 ************************************
00:07:05.370 END TEST locking_overlapped_coremask_via_rpc
00:07:05.370 ************************************
00:07:05.370 21:50:24 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:07:05.370 21:50:24 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:07:05.370 21:50:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3948044 ]]
00:07:05.370 21:50:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3948044
00:07:05.370 21:50:24 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3948044 ']'
00:07:05.370 21:50:24 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3948044
00:07:05.370 21:50:24 event.cpu_locks -- common/autotest_common.sh@953 -- # uname
00:07:05.370 21:50:24 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:05.370 21:50:24 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3948044
00:07:05.370 21:50:24 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:05.370 21:50:24 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:05.370 21:50:24 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3948044'
killing process with pid 3948044
00:07:05.370 21:50:24 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3948044
00:07:05.370 21:50:24 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3948044
00:07:07.899 21:50:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3948281 ]]
00:07:07.899 21:50:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3948281
00:07:07.899 21:50:27 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3948281 ']'
00:07:07.899 21:50:27 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3948281
00:07:07.899 21:50:27 event.cpu_locks -- common/autotest_common.sh@953 -- # uname
00:07:07.899 21:50:27 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:07.899 21:50:27 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3948281
00:07:07.899 21:50:27 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:07:07.899 21:50:27 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:07:07.899 21:50:27 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3948281'
killing process with pid 3948281
00:07:07.899 21:50:27 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3948281
00:07:07.899 21:50:27 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3948281
00:07:10.427 21:50:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:07:10.427 21:50:29 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:07:10.427 21:50:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3948044 ]]
00:07:10.427 21:50:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3948044
00:07:10.427 21:50:29 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3948044 ']'
00:07:10.427 21:50:29 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3948044
00:07:10.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3948044) - No such process
00:07:10.427 21:50:29 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3948044 is not found'
Process with pid 3948044 is not found
00:07:10.427 21:50:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3948281 ]]
00:07:10.427 21:50:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3948281
00:07:10.427 21:50:29 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3948281 ']'
00:07:10.427 21:50:29 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3948281
00:07:10.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3948281) - No such process
00:07:10.427 21:50:29 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3948281 is not found'
Process with pid 3948281 is not found
00:07:10.427 21:50:29 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:07:10.427
00:07:10.427 real 0m52.482s
00:07:10.427 user 1m26.837s
00:07:10.427 sys 0m7.785s
00:07:10.427 21:50:29 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:10.427 21:50:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:10.427 ************************************
00:07:10.427 END TEST cpu_locks
00:07:10.427 ************************************
00:07:10.428 21:50:29 event -- common/autotest_common.sh@1142 -- # return 0
00:07:10.428
00:07:10.428 real 1m22.131s
00:07:10.428 user 2m21.981s
00:07:10.428 sys 0m12.307s
00:07:10.428 21:50:29 event -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:10.428 21:50:29 event -- common/autotest_common.sh@10 -- # set +x
00:07:10.428 ************************************
00:07:10.428 END TEST event
00:07:10.428 ************************************
00:07:10.428 21:50:29 -- common/autotest_common.sh@1142 -- # return 0
00:07:10.428 21:50:29 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:07:10.428 21:50:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:10.428 21:50:29 -- common/autotest_common.sh@1105 -- # xtrace_disable
21:50:29 -- common/autotest_common.sh@10 -- # set +x
00:07:10.428 ************************************
00:07:10.428 START TEST thread
00:07:10.428 ************************************
00:07:10.428 21:50:29 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:07:10.428 * Looking for test storage...
00:07:10.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread
00:07:10.428 21:50:29 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:07:10.428 21:50:29 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']'
00:07:10.428 21:50:29 thread -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:10.428 21:50:29 thread -- common/autotest_common.sh@10 -- # set +x
00:07:10.428 ************************************
00:07:10.428 START TEST thread_poller_perf
00:07:10.428 ************************************
00:07:10.428 21:50:29 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:07:10.428 [2024-07-13 21:50:29.537150] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
[2024-07-13 21:50:29.537280] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3949314 ]
00:07:10.428 EAL: No free 2048 kB hugepages reported on node 1
00:07:10.428 [2024-07-13 21:50:29.665212] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:10.686 [2024-07-13 21:50:29.919787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:10.686 Running 1000 pollers for 1 seconds with 1 microseconds period.
00:07:12.059 ======================================
00:07:12.059 busy:2715597140 (cyc)
00:07:12.059 total_run_count: 288000
00:07:12.059 tsc_hz: 2700000000 (cyc)
00:07:12.059 ======================================
00:07:12.059 poller_cost: 9429 (cyc), 3492 (nsec)
00:07:12.059
00:07:12.059 real 0m1.848s
00:07:12.059 user 0m1.672s
00:07:12.059 sys 0m0.167s
00:07:12.059 21:50:31 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:12.059 21:50:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:07:12.059 ************************************
00:07:12.059 END TEST thread_poller_perf
00:07:12.059 ************************************
00:07:12.059 21:50:31 thread -- common/autotest_common.sh@1142 -- # return 0
00:07:12.059 21:50:31 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:07:12.059 21:50:31 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']'
00:07:12.059 21:50:31 thread -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:12.059 21:50:31 thread -- common/autotest_common.sh@10 -- # set +x
00:07:12.059 ************************************
00:07:12.059 START TEST thread_poller_perf
00:07:12.059 ************************************
00:07:12.059 21:50:31 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:07:12.059 [2024-07-13 21:50:31.432469] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
[2024-07-13 21:50:31.432585] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3949491 ]
00:07:12.318 EAL: No free 2048 kB hugepages reported on node 1
00:07:12.318 [2024-07-13 21:50:31.566520] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:12.576 [2024-07-13 21:50:31.822381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:12.576 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:07:13.955 ======================================
00:07:13.955 busy:2704983124 (cyc)
00:07:13.955 total_run_count: 3668000
00:07:13.955 tsc_hz: 2700000000 (cyc)
00:07:13.955 ======================================
00:07:13.955 poller_cost: 737 (cyc), 272 (nsec)
00:07:13.955
00:07:13.955 real 0m1.872s
00:07:13.955 user 0m1.704s
00:07:13.955 sys 0m0.159s
00:07:13.955 21:50:33 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:13.955 21:50:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:07:13.955 ************************************
00:07:13.955 END TEST thread_poller_perf
00:07:13.955 ************************************
00:07:13.955 21:50:33 thread -- common/autotest_common.sh@1142 -- # return 0
00:07:13.955 21:50:33 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:07:13.955
00:07:13.955 real 0m3.864s
00:07:13.955 user 0m3.432s
00:07:13.955 sys 0m0.423s
00:07:13.955 21:50:33 thread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:13.955 21:50:33 thread -- common/autotest_common.sh@10 -- # set +x
00:07:13.955 ************************************
00:07:13.955 END TEST thread
00:07:13.955 ************************************
00:07:13.955 21:50:33 -- common/autotest_common.sh@1142 -- # return 0
00:07:13.955 21:50:33 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh
00:07:13.955 21:50:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:13.955 21:50:33 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:13.955 21:50:33 -- common/autotest_common.sh@10 -- # set +x
00:07:13.955 ************************************
00:07:13.955 START TEST accel
00:07:13.955 ************************************
00:07:13.955 21:50:33 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh
00:07:14.217 * Looking for test storage...
00:07:14.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:07:14.217 21:50:33 accel -- accel/accel.sh@81 -- # declare -A expected_opcs
00:07:14.217 21:50:33 accel -- accel/accel.sh@82 -- # get_expected_opcs
00:07:14.217 21:50:33 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:07:14.217 21:50:33 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3949797
00:07:14.217 21:50:33 accel -- accel/accel.sh@63 -- # waitforlisten 3949797
00:07:14.217 21:50:33 accel -- common/autotest_common.sh@829 -- # '[' -z 3949797 ']'
00:07:14.217 21:50:33 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:14.217 21:50:33 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63
00:07:14.217 21:50:33 accel -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:14.217 21:50:33 accel -- accel/accel.sh@61 -- # build_accel_config
00:07:14.217 21:50:33 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
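For reference, the poller_cost figures in the two summaries above are plain ratios of the logged counters: busy cycles divided by total_run_count, converted to nanoseconds via the logged 2.7 GHz tsc_hz. Checking the first run with shell arithmetic:

  # 2715597140 cyc / 288000 polls = 9429 cyc per poll
  echo $(( 2715597140 / 288000 ))
  # 9429 cyc at 2700000000 cyc/s ~= 3492 ns per poll
  echo $(( 9429 * 1000000000 / 2700000000 ))

The second run (period 0, i.e. busy pollers rather than 1 us timed pollers) drops the per-poll cost from 9429 to 737 cycles, roughly the extra bookkeeping of the timed-poller path.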
00:07:14.217 21:50:33 accel -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:14.217 21:50:33 accel -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:14.217 21:50:33 accel -- common/autotest_common.sh@10 -- # set +x
00:07:14.217 21:50:33 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:14.217 21:50:33 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:14.217 21:50:33 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:14.217 21:50:33 accel -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:14.217 21:50:33 accel -- accel/accel.sh@40 -- # local IFS=,
00:07:14.217 21:50:33 accel -- accel/accel.sh@41 -- # jq -r .
00:07:14.217 [2024-07-13 21:50:33.475378] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
[2024-07-13 21:50:33.475546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3949797 ]
00:07:14.217 EAL: No free 2048 kB hugepages reported on node 1
00:07:14.217 [2024-07-13 21:50:33.604489] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:14.475 [2024-07-13 21:50:33.857012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:15.442 21:50:34 accel -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:15.442 21:50:34 accel -- common/autotest_common.sh@862 -- # return 0
00:07:15.442 21:50:34 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]]
00:07:15.442 21:50:34 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]]
00:07:15.442 21:50:34 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]]
00:07:15.442 21:50:34 accel -- accel/accel.sh@68 -- # [[ -n '' ]]
00:07:15.442 21:50:34 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]"))
00:07:15.442 21:50:34 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments
00:07:15.442 21:50:34 accel -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:15.442 21:50:34 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
00:07:15.442 21:50:34 accel -- common/autotest_common.sh@10 -- # set +x
00:07:15.442 21:50:34 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:15.442 21:50:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:15.442 21:50:34 accel -- accel/accel.sh@72 -- # IFS==
00:07:15.442 21:50:34 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:15.442 21:50:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:15.442 21:50:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:15.442 21:50:34 accel -- accel/accel.sh@72 -- # IFS==
00:07:15.442 21:50:34 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:15.442 21:50:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:15.442 21:50:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:15.442 21:50:34 accel -- accel/accel.sh@72 -- # IFS==
00:07:15.442 21:50:34 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:15.442 21:50:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:15.442 21:50:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:15.442 21:50:34 accel -- accel/accel.sh@72 -- # IFS==
00:07:15.442 21:50:34 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:15.442 21:50:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:15.442 21:50:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:15.442 21:50:34 accel -- accel/accel.sh@72 -- # IFS==
00:07:15.442 21:50:34 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:15.442 21:50:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:15.442 21:50:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:15.442 21:50:34 accel -- accel/accel.sh@72 -- # IFS==
00:07:15.442 21:50:34 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:15.442 21:50:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:15.442 21:50:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:15.442 21:50:34 accel -- accel/accel.sh@72 -- # IFS==
00:07:15.442 21:50:34 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:15.442 21:50:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:15.442 21:50:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:15.442 21:50:34 accel -- accel/accel.sh@72 -- # IFS==
00:07:15.442 21:50:34 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:15.442 21:50:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:15.442 21:50:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:15.442 21:50:34 accel -- accel/accel.sh@72 -- # IFS==
00:07:15.442 21:50:34 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:15.442 21:50:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:15.442 21:50:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:15.442 21:50:34 accel -- accel/accel.sh@72 -- # IFS==
00:07:15.442 21:50:34 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:15.442 21:50:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:15.442 21:50:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:15.442 21:50:34 accel -- accel/accel.sh@72 -- # IFS==
00:07:15.442 21:50:34 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:15.442 21:50:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:15.443 21:50:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:15.443 21:50:34 accel -- accel/accel.sh@72 -- # IFS==
00:07:15.443 21:50:34 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:15.443 21:50:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:15.443 21:50:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:15.443 21:50:34 accel -- accel/accel.sh@72 -- # IFS==
00:07:15.443 21:50:34 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:15.443 21:50:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:15.443 21:50:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:15.443 21:50:34 accel -- accel/accel.sh@72 -- # IFS==
00:07:15.443 21:50:34 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:15.443 21:50:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:15.443 21:50:34 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:07:15.443 21:50:34 accel -- accel/accel.sh@72 -- # IFS==
00:07:15.443 21:50:34 accel -- accel/accel.sh@72 -- # read -r opc module
00:07:15.443 21:50:34 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:07:15.443 21:50:34 accel -- accel/accel.sh@75 -- # killprocess 3949797
00:07:15.443 21:50:34 accel -- common/autotest_common.sh@948 -- # '[' -z 3949797 ']'
00:07:15.443 21:50:34 accel -- common/autotest_common.sh@952 -- # kill -0 3949797
00:07:15.443 21:50:34 accel -- common/autotest_common.sh@953 -- # uname
00:07:15.443 21:50:34 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:15.443 21:50:34 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3949797
00:07:15.443 21:50:34 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:15.443 21:50:34 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:07:15.443 21:50:34 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3949797'
killing process with pid 3949797
00:07:15.443 21:50:34 accel -- common/autotest_common.sh@967 -- # kill 3949797
00:07:15.443 21:50:34 accel -- common/autotest_common.sh@972 -- # wait 3949797
00:07:17.973 21:50:37 accel -- accel/accel.sh@76 -- # trap - ERR
00:07:17.973 21:50:37 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h
00:07:17.973 21:50:37 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:07:17.973 21:50:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:17.973 21:50:37 accel -- common/autotest_common.sh@10 -- # set +x
00:07:17.973 21:50:37 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h
00:07:17.973 21:50:37 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h
00:07:17.973 21:50:37 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config
00:07:17.973 21:50:37 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:17.973 21:50:37 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:17.973 21:50:37 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:17.973 21:50:37 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:17.973 21:50:37 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:17.973 21:50:37 accel.accel_help -- accel/accel.sh@40 -- # local IFS=,
00:07:17.973 21:50:37 accel.accel_help -- accel/accel.sh@41 -- # jq -r .
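The long run of expected_opcs assignments above is a single jq pipeline applied to the accel_get_opc_assignments RPC, with the loop body traced four lines at a time. Unrolled, the pattern the test script executes is roughly this sketch:

  # map each accel opcode to its assigned module, one 'opc=module' pair per line
  declare -A expected_opcs
  exp_opcs=($(./scripts/rpc.py accel_get_opc_assignments | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
  for opc_opt in "${exp_opcs[@]}"; do
      IFS="=" read -r opc module <<< "$opc_opt"
      expected_opcs["$opc"]=$module   # every opcode resolves to 'software' in this run
  done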
00:07:18.232 21:50:37 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:18.232 21:50:37 accel.accel_help -- common/autotest_common.sh@10 -- # set +x
00:07:18.232 21:50:37 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:18.232 21:50:37 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress
00:07:18.232 21:50:37 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:18.232 21:50:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:18.232 21:50:37 accel -- common/autotest_common.sh@10 -- # set +x
00:07:18.232 ************************************
00:07:18.232 START TEST accel_missing_filename
00:07:18.232 ************************************
00:07:18.232 21:50:37 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress
00:07:18.232 21:50:37 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0
00:07:18.232 21:50:37 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress
00:07:18.232 21:50:37 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:07:18.232 21:50:37 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:18.232 21:50:37 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf
00:07:18.232 21:50:37 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:18.232 21:50:37 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress
00:07:18.232 21:50:37 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress
00:07:18.232 21:50:37 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config
00:07:18.232 21:50:37 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:18.232 21:50:37 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:18.232 21:50:37 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:18.232 21:50:37 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:18.232 21:50:37 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:18.232 21:50:37 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=,
00:07:18.232 21:50:37 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r .
00:07:18.233 [2024-07-13 21:50:37.473514] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
[2024-07-13 21:50:37.473637] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3950364 ]
00:07:18.233 EAL: No free 2048 kB hugepages reported on node 1
00:07:18.233 [2024-07-13 21:50:37.605012] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:18.491 [2024-07-13 21:50:37.860624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:18.749 [2024-07-13 21:50:38.090926] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:19.316 [2024-07-13 21:50:38.650016] accel_perf.c:1464:main: *ERROR*: ERROR starting application
00:07:19.882 A filename is required.
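The failure above is the first negative case for the compress workload: with no -l argument there is no input file, and accel_perf exits with "A filename is required." The compress_verify test that follows supplies the file but keeps -y, which compress also rejects ("Compression does not support the verify option"). Stripped of the harness, the two invocations reduce to:

  # rejected: compress workload without an input file
  ./build/examples/accel_perf -t 1 -w compress
  # rejected: compress does not support result verification
  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib -y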
00:07:19.882 21:50:39 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234
00:07:19.882 21:50:39 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:19.882 21:50:39 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106
00:07:19.882 21:50:39 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in
00:07:19.882 21:50:39 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1
00:07:19.882 21:50:39 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:19.882
00:07:19.882 real 0m1.680s
00:07:19.882 user 0m1.453s
00:07:19.882 sys 0m0.254s
00:07:19.882 21:50:39 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:19.882 21:50:39 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x
00:07:19.882 ************************************
00:07:19.882 END TEST accel_missing_filename
00:07:19.882 ************************************
00:07:19.882 21:50:39 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:19.883 21:50:39 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:07:19.883 21:50:39 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']'
00:07:19.883 21:50:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:19.883 21:50:39 accel -- common/autotest_common.sh@10 -- # set +x
00:07:19.883 ************************************
00:07:19.883 START TEST accel_compress_verify
00:07:19.883 ************************************
00:07:19.883 21:50:39 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:07:19.883 21:50:39 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0
00:07:19.883 21:50:39 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:07:19.883 21:50:39 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:07:19.883 21:50:39 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:19.883 21:50:39 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf
00:07:19.883 21:50:39 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:19.883 21:50:39 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:07:19.883 21:50:39 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:07:19.883 21:50:39 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config
00:07:19.883 21:50:39 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:19.883 21:50:39 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:19.883 21:50:39 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:19.883 21:50:39 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:19.883 21:50:39 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:19.883 21:50:39 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=,
00:07:19.883 21:50:39 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r .
00:07:19.883 [2024-07-13 21:50:39.205781] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
[2024-07-13 21:50:39.205951] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3950528 ]
00:07:20.142 EAL: No free 2048 kB hugepages reported on node 1
00:07:20.142 [2024-07-13 21:50:39.335928] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:20.401 [2024-07-13 21:50:39.602330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:20.660 [2024-07-13 21:50:39.837831] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:21.222 [2024-07-13 21:50:40.399203] accel_perf.c:1464:main: *ERROR*: ERROR starting application
00:07:21.481
00:07:21.481 Compression does not support the verify option, aborting.
00:07:21.481 21:50:40 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161
00:07:21.481 21:50:40 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:21.481 21:50:40 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33
00:07:21.481 21:50:40 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in
00:07:21.481 21:50:40 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1
00:07:21.481 21:50:40 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:21.481
00:07:21.481 real 0m1.693s
00:07:21.481 user 0m1.477s
00:07:21.481 sys 0m0.245s
00:07:21.481 21:50:40 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:21.481 21:50:40 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x
00:07:21.481 ************************************
00:07:21.481 END TEST accel_compress_verify
00:07:21.481 ************************************
00:07:21.740 21:50:40 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:21.740 21:50:40 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar
00:07:21.740 21:50:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:21.740 21:50:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:21.740 21:50:40 accel -- common/autotest_common.sh@10 -- # set +x
00:07:21.740 ************************************
00:07:21.740 START TEST accel_wrong_workload
00:07:21.740 ************************************
00:07:21.740 21:50:40 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar
00:07:21.740 21:50:40 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0
00:07:21.740 21:50:40 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar
00:07:21.740 21:50:40 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:07:21.740 21:50:40 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:21.740 21:50:40 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf
00:07:21.740 21:50:40 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:21.740 21:50:40 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar
00:07:21.740 21:50:40 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar
00:07:21.740 21:50:40 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config
00:07:21.740 21:50:40 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:21.740 21:50:40 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:21.740 21:50:40 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:21.740 21:50:40 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:21.740 21:50:40 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:21.740 21:50:40 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=,
00:07:21.740 21:50:40 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r .
00:07:21.740 Unsupported workload type: foobar
00:07:21.740 [2024-07-13 21:50:40.938295] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1
00:07:21.740 accel_perf options:
00:07:21.740 [-h help message]
00:07:21.740 [-q queue depth per core]
00:07:21.740 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:07:21.740 [-T number of threads per core
00:07:21.740 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:07:21.740 [-t time in seconds]
00:07:21.740 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:07:21.740 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy
00:07:21.740 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:07:21.740 [-l for compress/decompress workloads, name of uncompressed input file
00:07:21.740 [-S for crc32c workload, use this seed value (default 0)
00:07:21.740 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:07:21.740 [-f for fill workload, use this BYTE value (default 255)
00:07:21.740 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:07:21.740 [-y verify result if this switch is on]
00:07:21.740 [-a tasks to allocate per core (default: same value as -q)]
00:07:21.740 Can be used to spread operations across a wider range of memory.
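Each of these expected failures is wrapped in the harness's NOT helper, which inverts the wrapped command's exit status so that a failure counts as a pass; the es=1 bookkeeping traced above is its exit-status handling. A minimal stand-in with the same observable behavior (the real helper lives in autotest_common.sh):

  NOT() {
      # succeed only when the wrapped command fails
      if "$@"; then
          return 1
      fi
      return 0
  }
  # passes: '-w foobar' is not a valid workload, so accel_perf exits non-zero
  NOT ./build/examples/accel_perf -t 1 -w foobar
  # the same pattern covers the negative-buffer case that follows (-w xor -y -x -1)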
00:07:21.740 21:50:40 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1
00:07:21.740 21:50:40 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:21.740 21:50:40 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:07:21.740 21:50:40 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:21.740
00:07:21.740 real 0m0.058s
00:07:21.740 user 0m0.058s
00:07:21.740 sys 0m0.037s
00:07:21.740 21:50:40 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:21.740 21:50:40 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x
00:07:21.740 ************************************
00:07:21.740 END TEST accel_wrong_workload
00:07:21.740 ************************************
00:07:21.740 21:50:40 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:21.740 21:50:40 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1
00:07:21.740 21:50:40 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']'
00:07:21.740 21:50:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:21.740 21:50:40 accel -- common/autotest_common.sh@10 -- # set +x
00:07:21.740 ************************************
00:07:21.740 START TEST accel_negative_buffers
00:07:21.740 ************************************
00:07:21.740 21:50:40 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1
00:07:21.740 21:50:40 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0
00:07:21.740 21:50:40 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1
00:07:21.740 21:50:40 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf
00:07:21.740 21:50:40 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:21.740 21:50:40 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf
00:07:21.740 21:50:40 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:21.740 21:50:40 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1
00:07:21.740 21:50:40 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1
00:07:21.740 21:50:40 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config
00:07:21.740 21:50:40 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:21.740 21:50:40 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:21.740 21:50:40 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:21.740 21:50:40 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:21.740 21:50:40 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:21.740 21:50:40 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=,
00:07:21.740 21:50:40 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r .
-x option must be non-negative.
00:07:21.740 [2024-07-13 21:50:41.037682] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1
00:07:21.740 accel_perf options:
00:07:21.740 [-h help message]
00:07:21.740 [-q queue depth per core]
00:07:21.740 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:07:21.740 [-T number of threads per core
00:07:21.740 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:07:21.740 [-t time in seconds]
00:07:21.740 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:07:21.740 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy
00:07:21.740 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:07:21.740 [-l for compress/decompress workloads, name of uncompressed input file
00:07:21.740 [-S for crc32c workload, use this seed value (default 0)
00:07:21.740 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:07:21.740 [-f for fill workload, use this BYTE value (default 255)
00:07:21.740 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:07:21.740 [-y verify result if this switch is on]
00:07:21.740 [-a tasks to allocate per core (default: same value as -q)]
00:07:21.740 Can be used to spread operations across a wider range of memory.
00:07:21.740 21:50:41 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1
00:07:21.740 21:50:41 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:21.740 21:50:41 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:07:21.740 21:50:41 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:21.740
00:07:21.740 real 0m0.055s
00:07:21.740 user 0m0.062s
00:07:21.740 sys 0m0.029s
00:07:21.740 21:50:41 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:21.740 21:50:41 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x
00:07:21.740 ************************************
00:07:21.740 END TEST accel_negative_buffers
00:07:21.740 ************************************
00:07:21.741 21:50:41 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:21.741 21:50:41 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y
00:07:21.741 21:50:41 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:07:21.741 21:50:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:21.741 21:50:41 accel -- common/autotest_common.sh@10 -- # set +x
00:07:21.741 ************************************
00:07:21.741 START TEST accel_crc32c
00:07:21.741 ************************************
00:07:21.741 21:50:41 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:07:21.741 21:50:41 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module
00:07:21.741 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:21.741 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:21.741 21:50:41 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y
00:07:21.741 21:50:41 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
00:07:21.741 21:50:41 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:07:21.741 21:50:41 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:21.741 21:50:41 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:21.741 21:50:41 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:21.741 21:50:41 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:21.741 21:50:41 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:21.741 21:50:41 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=,
00:07:21.741 21:50:41 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r .
00:07:21.999 [2024-07-13 21:50:41.138084] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
[2024-07-13 21:50:41.138228] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3950849 ]
00:07:21.999 EAL: No free 2048 kB hugepages reported on node 1
00:07:22.258 [2024-07-13 21:50:41.267753] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:22.516 [2024-07-13 21:50:41.528232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=:
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=
00:07:22.516 21:50:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:07:24.415 21:50:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:24.415 21:50:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:24.415 21:50:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:24.415 21:50:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:24.415 21:50:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:24.415 21:50:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:24.415 21:50:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:24.415 21:50:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:24.415 21:50:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:24.415 21:50:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:24.415 21:50:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:24.415 21:50:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:24.415 21:50:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:24.415 21:50:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:24.415 21:50:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:24.415 21:50:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:24.415 21:50:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:24.415 21:50:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:24.415 21:50:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:24.415 21:50:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:24.415 21:50:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:24.415 21:50:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:24.415 21:50:43 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.415 21:50:43 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:24.415 21:50:43 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.415 00:07:24.415 real 0m2.686s 00:07:24.415 user 0m2.427s 00:07:24.415 sys 0m0.256s 00:07:24.415 21:50:43 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.415 21:50:43 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:24.415 ************************************ 00:07:24.415 END TEST accel_crc32c 00:07:24.415 ************************************ 00:07:24.415 21:50:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:24.415 21:50:43 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:24.415 21:50:43 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:24.415 21:50:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.415 21:50:43 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.673 ************************************ 00:07:24.673 START TEST accel_crc32c_C2 00:07:24.673 ************************************ 00:07:24.673 21:50:43 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:24.673 21:50:43 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.673 21:50:43 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:24.673 21:50:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:24.673 21:50:43 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:24.673 21:50:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:24.673 21:50:43 accel.accel_crc32c_C2 
00:07:24.415 21:50:43 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2
00:07:24.415 21:50:43 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:07:24.415 21:50:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:24.415 21:50:43 accel -- common/autotest_common.sh@10 -- # set +x
00:07:24.673 ************************************
00:07:24.673 START TEST accel_crc32c_C2
00:07:24.673 ************************************
00:07:24.673 21:50:43 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2
00:07:24.673 21:50:43 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:07:24.673 21:50:43 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:07:24.673 21:50:43 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2
00:07:24.673 21:50:43 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:07:24.673 21:50:43 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:07:24.673 21:50:43 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:24.673 21:50:43 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:24.673 21:50:43 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=,
00:07:24.673 21:50:43 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:07:24.673 [2024-07-13 21:50:43.877549] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:07:24.673 [2024-07-13 21:50:43.877699] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3951145 ]
00:07:24.673 EAL: No free 2048 kB hugepages reported on node 1
00:07:24.673 [2024-07-13 21:50:44.009311] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:24.932 [2024-07-13 21:50:44.270435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:25.190 21:50:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:07:25.190 21:50:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c
00:07:25.190 21:50:44 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c
00:07:25.190 21:50:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:07:25.190 21:50:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:25.190 21:50:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:07:25.190 21:50:44 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:07:25.190 21:50:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:07:25.190 21:50:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:07:25.190 21:50:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:07:25.190 21:50:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:07:25.190 21:50:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:07:27.721 21:50:46 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:27.721 21:50:46 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:07:27.721 21:50:46 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:27.721
00:07:27.721 real	0m2.695s
00:07:27.721 user	0m2.433s
00:07:27.721 sys	0m0.258s
00:07:27.721 21:50:46 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:27.721 21:50:46 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:07:27.721 ************************************
00:07:27.721 END TEST accel_crc32c_C2
00:07:27.721 ************************************
00:07:27.721 21:50:46 accel -- common/autotest_common.sh@1142 -- # return 0
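(Aside: the START/END banners and the bash real/user/sys lines around each case come from a wrapper in test/common/autotest_common.sh. A hedged reconstruction of its visible shape, not the verbatim run_test:)
  run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"            # bash's time keyword emits the real/user/sys lines seen in the log
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
  }
  # usage mirroring the log: run_test_sketch accel_copy accel_test -t 1 -w copy -y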
00:07:27.721 21:50:46 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:07:27.721 21:50:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:27.721 21:50:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:27.721 21:50:46 accel -- common/autotest_common.sh@10 -- # set +x
00:07:27.721 ************************************
00:07:27.721 START TEST accel_copy
00:07:27.721 ************************************
00:07:27.721 21:50:46 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y
00:07:27.721 21:50:46 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc
00:07:27.721 21:50:46 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module
00:07:27.722 21:50:46 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:07:27.722 21:50:46 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:07:27.722 21:50:46 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:07:27.722 21:50:46 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:27.722 21:50:46 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:27.722 21:50:46 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=,
00:07:27.722 21:50:46 accel.accel_copy -- accel/accel.sh@41 -- # jq -r .
00:07:27.722 [2024-07-13 21:50:46.616003] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:07:27.722 [2024-07-13 21:50:46.616140] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3951545 ]
00:07:27.722 EAL: No free 2048 kB hugepages reported on node 1
00:07:27.722 [2024-07-13 21:50:46.746976] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:27.980 [2024-07-13 21:50:47.005551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:27.980 21:50:47 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1
00:07:27.980 21:50:47 accel.accel_copy -- accel/accel.sh@20 -- # val=copy
00:07:27.980 21:50:47 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy
00:07:27.980 21:50:47 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:27.980 21:50:47 accel.accel_copy -- accel/accel.sh@20 -- # val=software
00:07:27.980 21:50:47 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software
00:07:27.980 21:50:47 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:07:27.980 21:50:47 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:07:27.980 21:50:47 accel.accel_copy -- accel/accel.sh@20 -- # val=1
00:07:27.980 21:50:47 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:07:27.980 21:50:47 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes
00:07:29.881 21:50:49 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:29.881 21:50:49 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:07:29.881 21:50:49 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:29.881
00:07:29.881 real	0m2.685s
00:07:29.881 user	0m2.437s
00:07:29.881 sys	0m0.245s
00:07:29.881 21:50:49 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:29.881 21:50:49 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x
00:07:30.140 ************************************
00:07:30.140 END TEST accel_copy
00:07:30.140 ************************************
00:07:30.140 21:50:49 accel -- common/autotest_common.sh@1142 -- # return 0
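(Aside: the repetitive IFS=: / read -r var val / case "$var" in trace surrounding every val= line above is a key:value parsing loop in accel/accel.sh reading the parameters that accel_perf reports back. A hedged reconstruction of its shape; the key names and input stream here are assumptions, not the verbatim script:)
  # Sketch of the loop behind the trace; feeds itself a fake two-line report.
  while IFS=: read -r var val; do
    case "$var" in
      opc)    accel_opc=$val ;;     # workload name, e.g. copy, fill, crc32c
      module) accel_module=$val ;;  # engine that ran it, e.g. software
    esac
  done <<'EOF'
opc:copy
module:software
EOF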
00:07:30.140 21:50:49 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:30.140 21:50:49 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:07:30.140 21:50:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:30.140 21:50:49 accel -- common/autotest_common.sh@10 -- # set +x
00:07:30.140 ************************************
00:07:30.140 START TEST accel_fill
00:07:30.140 ************************************
00:07:30.140 21:50:49 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:30.140 21:50:49 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc
00:07:30.140 21:50:49 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module
00:07:30.140 21:50:49 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:30.140 21:50:49 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:30.140 21:50:49 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:07:30.140 21:50:49 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:30.140 21:50:49 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:30.140 21:50:49 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=,
00:07:30.140 21:50:49 accel.accel_fill -- accel/accel.sh@41 -- # jq -r .
00:07:30.140 [2024-07-13 21:50:49.348204] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:07:30.140 [2024-07-13 21:50:49.348324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3951840 ]
00:07:30.140 EAL: No free 2048 kB hugepages reported on node 1
00:07:30.140 [2024-07-13 21:50:49.470707] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:30.398 [2024-07-13 21:50:49.733723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:30.657 21:50:49 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1
00:07:30.657 21:50:49 accel.accel_fill -- accel/accel.sh@20 -- # val=fill
00:07:30.657 21:50:49 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill
00:07:30.657 21:50:49 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80
00:07:30.657 21:50:49 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:30.657 21:50:49 accel.accel_fill -- accel/accel.sh@20 -- # val=software
00:07:30.657 21:50:49 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software
00:07:30.657 21:50:49 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:07:30.657 21:50:49 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:07:30.657 21:50:49 accel.accel_fill -- accel/accel.sh@20 -- # val=1
00:07:30.657 21:50:49 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds'
00:07:30.657 21:50:49 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes
00:07:32.591 21:50:51 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:32.848 21:50:51 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:07:32.848 21:50:51 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:32.848
00:07:32.848 real	0m2.691s
00:07:32.848 user	0m2.460s
00:07:32.848 sys	0m0.228s
00:07:32.848 21:50:51 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:32.848 21:50:51 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:07:32.848 ************************************
00:07:32.848 END TEST accel_fill
00:07:32.848 ************************************
00:07:32.848 21:50:52 accel -- common/autotest_common.sh@1142 -- # return 0
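(Aside: with one entry per line, the per-case wall times are easy to pull out of a log in this format. A sketch, not part of the harness; build.log is a placeholder file name:)
  # Remember the last bash "real" line, print it at each END TEST banner,
  # e.g. "accel_fill 0m2.691s".
  awk '$2 == "real" { t = $3 } /END TEST/ { print $4, t }' build.log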
00:07:32.848 21:50:52 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:07:32.848 21:50:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:32.848 21:50:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:32.848 21:50:52 accel -- common/autotest_common.sh@10 -- # set +x
00:07:32.848 ************************************
00:07:32.848 START TEST accel_copy_crc32c
00:07:32.848 ************************************
00:07:32.848 21:50:52 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y
00:07:32.848 21:50:52 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:07:32.848 21:50:52 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module
00:07:32.848 21:50:52 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:07:32.848 21:50:52 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:07:32.848 21:50:52 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:07:32.848 21:50:52 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:32.848 21:50:52 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:32.848 21:50:52 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=,
00:07:32.848 21:50:52 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r .
00:07:32.848 [2024-07-13 21:50:52.085783] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:07:32.848 [2024-07-13 21:50:52.085941] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3952252 ]
00:07:32.848 EAL: No free 2048 kB hugepages reported on node 1
00:07:32.848 [2024-07-13 21:50:52.215934] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:33.105 [2024-07-13 21:50:52.477858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:33.361 21:50:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
00:07:33.361 21:50:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c
00:07:33.362 21:50:52 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:07:33.362 21:50:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
00:07:33.362 21:50:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:33.362 21:50:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:33.362 21:50:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software
00:07:33.362 21:50:52 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:07:33.362 21:50:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:07:33.362 21:50:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:07:33.362 21:50:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1
00:07:33.362 21:50:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:07:33.362 21:50:52 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes
00:07:35.884 21:50:54 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:35.884 21:50:54 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:07:35.884 21:50:54 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:35.884
00:07:35.884 real	0m2.697s
00:07:35.884 user	0m2.454s
00:07:35.884 sys	0m0.241s
00:07:35.884 21:50:54 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:35.884 21:50:54 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:07:35.884 ************************************
00:07:35.884 END TEST accel_copy_crc32c
00:07:35.884 ************************************
00:07:35.884 21:50:54 accel -- common/autotest_common.sh@1142 -- # return 0
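(Aside: the recurring "EAL: No free 2048 kB hugepages reported on node 1" NOTICE is emitted by DPDK when a NUMA node has no free pages of that size; every run here still proceeds, so the pool is presumably satisfied from node 0. To inspect hugepage state on such a node by hand:)
  grep -i huge /proc/meminfo      # totals and free counts for the default page size
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages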
00:07:35.884 21:50:54 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:07:35.884 21:50:54 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:07:35.884 21:50:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:35.884 21:50:54 accel -- common/autotest_common.sh@10 -- # set +x
00:07:35.884 ************************************
00:07:35.884 START TEST accel_copy_crc32c_C2
00:07:35.884 ************************************
00:07:35.884 21:50:54 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:07:35.884 21:50:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:07:35.884 21:50:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:07:35.885 21:50:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:07:35.885 21:50:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:07:35.885 21:50:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:07:35.885 21:50:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:35.885 21:50:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:35.885 21:50:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=,
00:07:35.885 21:50:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:07:35.885 [2024-07-13 21:50:54.832981] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:07:35.885 [2024-07-13 21:50:54.833095] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3952543 ]
00:07:35.885 EAL: No free 2048 kB hugepages reported on node 1
00:07:35.885 [2024-07-13 21:50:54.963459] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:35.885 [2024-07-13 21:50:55.225606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:36.143 21:50:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:07:36.143 21:50:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c
00:07:36.143 21:50:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:07:36.143 21:50:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:07:36.143 21:50:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:36.143 21:50:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
00:07:36.143 21:50:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:07:36.143 21:50:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:07:36.143 21:50:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:07:36.143 21:50:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:07:36.143 21:50:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:07:36.143 21:50:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:07:36.143 21:50:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:07:38.673 21:50:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:38.673 21:50:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:07:38.673 21:50:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:38.673
00:07:38.673 real	0m2.702s
00:07:38.673 user	0m2.449s
00:07:38.673 sys	0m0.250s
00:07:38.673 21:50:57 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:38.673 21:50:57 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:07:38.673 ************************************
00:07:38.673 END TEST accel_copy_crc32c_C2
00:07:38.673 ************************************
00:07:38.673 21:50:57 accel -- common/autotest_common.sh@1142 -- # return 0
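(Aside: the backslashes in the recurring "[[ software == \s\o\f\t\w\a\r\e ]]" check are an xtrace artifact, not part of the script: bash escapes a quoted right-hand side of == when tracing, to show it is matched literally rather than as a glob. A small demonstration:)
  set -x
  accel_module=software
  [[ "$accel_module" == "software" ]] && echo module ok   # traces with the escaped RHS
  set +x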
00:07:38.673 21:50:57 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:07:38.673 21:50:57 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:38.673 21:50:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:38.673 21:50:57 accel -- common/autotest_common.sh@10 -- # set +x
00:07:38.673 ************************************
00:07:38.673 START TEST accel_dualcast
00:07:38.673 ************************************
00:07:38.674 21:50:57 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y
00:07:38.674 21:50:57 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc
00:07:38.674 21:50:57 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module
00:07:38.674 21:50:57 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:07:38.674 21:50:57 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:07:38.674 21:50:57 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:07:38.674 21:50:57 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:38.674 21:50:57 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:38.674 21:50:57 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=,
00:07:38.674 21:50:57 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r .
00:07:38.674 [2024-07-13 21:50:57.583010] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:07:38.674 [2024-07-13 21:50:57.583141] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3952891 ]
00:07:38.674 EAL: No free 2048 kB hugepages reported on node 1
00:07:38.674 [2024-07-13 21:50:57.714447] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:38.933 [2024-07-13 21:50:57.974871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:38.933 21:50:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1
00:07:38.933 21:50:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast
00:07:38.933 21:50:58 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
00:07:38.933 21:50:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:38.933 21:50:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software
00:07:38.933 21:50:58 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
00:07:38.933 21:50:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:07:38.933 21:50:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:07:38.933 21:50:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1
00:07:38.933 21:50:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds'
00:07:38.933 21:50:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes
00:07:41.465 21:51:00 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:41.465 21:51:00 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:07:41.465 21:51:00 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:41.465
00:07:41.465 real	0m2.712s
00:07:41.465 user	0m0.010s
00:07:41.465 sys	0m0.002s
00:07:41.465 21:51:00 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:41.465 21:51:00 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:07:41.465 ************************************
00:07:41.465 END TEST accel_dualcast
00:07:41.465 ************************************
00:07:41.465 21:51:00 accel -- common/autotest_common.sh@1142 -- # return 0
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:41.465 21:51:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:41.465 21:51:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:41.465 21:51:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:41.465 21:51:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:41.465 21:51:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:41.465 21:51:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:41.465 21:51:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:41.465 21:51:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:41.465 21:51:00 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.465 21:51:00 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:41.465 21:51:00 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.465 00:07:41.465 real 0m2.712s 00:07:41.465 user 0m0.010s 00:07:41.465 sys 0m0.002s 00:07:41.465 21:51:00 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.465 21:51:00 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:41.465 ************************************ 00:07:41.465 END TEST accel_dualcast 00:07:41.465 ************************************ 00:07:41.465 21:51:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:41.465 21:51:00 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:41.465 21:51:00 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:41.465 21:51:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.465 21:51:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.465 ************************************ 00:07:41.465 START TEST accel_compare 00:07:41.465 ************************************ 00:07:41.465 21:51:00 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:41.465 21:51:00 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:41.465 21:51:00 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:41.465 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:41.465 21:51:00 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:41.465 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:41.465 21:51:00 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:41.465 21:51:00 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:41.465 21:51:00 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.465 21:51:00 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.465 21:51:00 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.465 21:51:00 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.465 21:51:00 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.465 21:51:00 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:41.465 21:51:00 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:41.465 [2024-07-13 21:51:00.335296] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
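Each accel suite in this stretch of the run follows the same shape: run_test wraps accel_test, which launches SPDK's standalone accel_perf example with a workload name, and the dense xtrace above is that wrapper reading back its option variables. The accel_perf command line is visible verbatim in the trace, so the compare run that starts here can be reproduced by hand; a minimal sketch, assuming an SPDK build tree as the working directory and skipping the JSON config that build_accel_config normally pipes in on /dev/fd/62:

    # software 'compare' workload for 1 second, verifying results (-y)
    ./build/examples/accel_perf -t 1 -w compare -y

Each invocation boots a fresh single-core SPDK application (core mask 0x1 in the DPDK EAL parameters, "Total cores available: 1"), which is why every suite repeats the same EAL and reactor start-up notices.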
00:07:41.465 [2024-07-13 21:51:00.335418] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3953272 ] 00:07:41.465 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.465 [2024-07-13 21:51:00.466185] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.465 [2024-07-13 21:51:00.723999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:41.724 21:51:00 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:41.724 21:51:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:43.626 21:51:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:43.626 21:51:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:43.626 21:51:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:43.626 21:51:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:43.627 21:51:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:43.627 21:51:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:43.627 21:51:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:43.627 21:51:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:43.627 21:51:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:43.627 21:51:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:43.627 21:51:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:43.627 21:51:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:43.627 21:51:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:43.627 21:51:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:43.627 21:51:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:43.627 21:51:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:43.627 
21:51:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:43.627 21:51:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:43.627 21:51:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:43.627 21:51:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:43.627 21:51:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:43.627 21:51:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:43.627 21:51:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:43.627 21:51:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:43.627 21:51:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.627 21:51:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:43.627 21:51:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.627 00:07:43.627 real 0m2.683s 00:07:43.627 user 0m2.445s 00:07:43.627 sys 0m0.232s 00:07:43.627 21:51:02 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.627 21:51:02 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:43.627 ************************************ 00:07:43.627 END TEST accel_compare 00:07:43.627 ************************************ 00:07:43.627 21:51:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:43.627 21:51:03 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:43.627 21:51:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:43.627 21:51:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.627 21:51:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.885 ************************************ 00:07:43.885 START TEST accel_xor 00:07:43.885 ************************************ 00:07:43.885 21:51:03 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:43.885 21:51:03 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:43.885 21:51:03 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:43.885 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:43.885 21:51:03 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:43.885 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:43.885 21:51:03 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:43.885 21:51:03 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:43.885 21:51:03 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.885 21:51:03 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.885 21:51:03 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.885 21:51:03 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.885 21:51:03 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.885 21:51:03 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:43.885 21:51:03 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:43.885 [2024-07-13 21:51:03.065626] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
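The xor workload is exercised twice in a row: the run starting here uses accel_perf's default number of source buffers (the trace reads val=2), and the next suite repeats it with -x 3, consistent with -x selecting the XOR source-buffer count. Both command lines appear verbatim in the run_test frames; by hand, again assuming an SPDK build tree, they would be:

    # XOR across the default two source buffers, then across three (-x 3)
    ./build/examples/accel_perf -t 1 -w xor -y
    ./build/examples/accel_perf -t 1 -w xor -y -x 3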
00:07:43.885 [2024-07-13 21:51:03.065756] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3953648 ] 00:07:43.885 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.885 [2024-07-13 21:51:03.199718] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.142 [2024-07-13 21:51:03.461346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.400 21:51:03 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:44.400 21:51:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.930 00:07:46.930 real 0m2.699s 00:07:46.930 user 0m2.444s 00:07:46.930 sys 0m0.251s 00:07:46.930 21:51:05 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.930 21:51:05 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:46.930 ************************************ 00:07:46.930 END TEST accel_xor 00:07:46.930 ************************************ 00:07:46.930 21:51:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:46.930 21:51:05 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:46.930 21:51:05 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:46.930 21:51:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.930 21:51:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:46.930 ************************************ 00:07:46.930 START TEST accel_xor 00:07:46.930 ************************************ 00:07:46.930 21:51:05 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:46.930 21:51:05 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:46.930 [2024-07-13 21:51:05.812228] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
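The START TEST / END TEST banners and the real/user/sys summaries bracketing each suite come from the run_test helper in common/autotest_common.sh (the @1099/@1105/@1123/@1142 frames in the trace). A stripped-down sketch of that pattern, as a hypothetical reconstruction rather than the actual helper:

    # rough shape of run_test: banner, timed body, banner
    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"    # emits the real/user/sys lines seen above
        local rc=$?
        echo "END TEST $name"
        return $rc
    }

It is invoked here as run_test accel_xor accel_test -t 1 -w xor -y -x 3, exactly as the accel.sh@110 frame shows.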
00:07:46.930 [2024-07-13 21:51:05.812368] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3954052 ] 00:07:46.930 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.930 [2024-07-13 21:51:05.946539] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.930 [2024-07-13 21:51:06.208598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.189 21:51:06 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:47.189 21:51:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:49.092 21:51:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.092 00:07:49.092 real 0m2.706s 00:07:49.092 user 0m2.450s 00:07:49.092 sys 0m0.251s 00:07:49.092 21:51:08 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.092 21:51:08 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:49.092 ************************************ 00:07:49.092 END TEST accel_xor 00:07:49.092 ************************************ 00:07:49.361 21:51:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:49.361 21:51:08 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:49.361 21:51:08 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:49.361 21:51:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.361 21:51:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.361 ************************************ 00:07:49.361 START TEST accel_dif_verify 00:07:49.361 ************************************ 00:07:49.361 21:51:08 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:49.361 21:51:08 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:49.361 21:51:08 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:49.361 21:51:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:49.361 21:51:08 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:49.361 21:51:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:49.361 21:51:08 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:49.361 21:51:08 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:49.361 21:51:08 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.361 21:51:08 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.361 21:51:08 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.361 21:51:08 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.361 21:51:08 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.361 21:51:08 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:49.361 21:51:08 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:49.361 [2024-07-13 21:51:08.565211] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
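The three DIF suites that close out this stretch (dif_verify, dif_generate, dif_generate_copy) exercise T10 protection-information handling, which is why their traces read extra buffer parameters: alongside the usual 4096-byte transfer size there are '512 bytes' and '8 bytes' entries, consistent with DIF's layout of fixed-size data blocks each paired with an 8-byte protection-information field. The invocations stay on the same one-liner pattern (no -y this time, matching the run_test lines in the trace):

    # DIF generate/verify workloads, 1 second each
    ./build/examples/accel_perf -t 1 -w dif_verify
    ./build/examples/accel_perf -t 1 -w dif_generate
    ./build/examples/accel_perf -t 1 -w dif_generate_copy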
00:07:49.361 [2024-07-13 21:51:08.565334] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3954710 ] 00:07:49.361 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.361 [2024-07-13 21:51:08.695023] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.688 [2024-07-13 21:51:08.960628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:49.945 21:51:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:49.946 21:51:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:51.847 21:51:11 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.847 00:07:51.847 real 0m2.689s 00:07:51.847 user 0m0.011s 00:07:51.847 sys 0m0.002s 00:07:51.847 21:51:11 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.847 21:51:11 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:51.847 ************************************ 00:07:51.847 END TEST accel_dif_verify 00:07:51.847 ************************************ 00:07:51.847 21:51:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:51.847 21:51:11 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:51.847 21:51:11 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:51.847 21:51:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.847 21:51:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.106 ************************************ 00:07:52.106 START TEST accel_dif_generate 00:07:52.106 ************************************ 00:07:52.106 21:51:11 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:52.106 21:51:11 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:52.106 21:51:11 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:52.106 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.106 
21:51:11 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:52.106 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.106 21:51:11 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:52.106 21:51:11 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:52.106 21:51:11 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.106 21:51:11 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.106 21:51:11 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.106 21:51:11 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.106 21:51:11 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.106 21:51:11 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:52.106 21:51:11 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:52.106 [2024-07-13 21:51:11.301760] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:52.106 [2024-07-13 21:51:11.301909] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3955258 ] 00:07:52.106 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.106 [2024-07-13 21:51:11.432164] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.364 [2024-07-13 21:51:11.690859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:52.623 21:51:11 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.623 21:51:11 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:52.623 21:51:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:55.151 21:51:13 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:55.151 21:51:13 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.151 00:07:55.151 real 0m2.692s 00:07:55.151 user 0m0.011s 00:07:55.151 sys 0m0.002s 00:07:55.151 21:51:13 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.151 21:51:13 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:55.151 ************************************ 00:07:55.151 END TEST accel_dif_generate 00:07:55.151 ************************************ 00:07:55.151 21:51:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:55.151 21:51:13 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:55.151 21:51:13 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:55.151 21:51:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.151 21:51:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:55.151 ************************************ 00:07:55.151 START TEST accel_dif_generate_copy 00:07:55.151 ************************************ 00:07:55.151 21:51:13 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:55.151 21:51:13 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:55.151 21:51:13 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:55.151 21:51:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.151 21:51:13 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:55.151 21:51:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.151 21:51:13 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:55.151 21:51:13 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:55.151 21:51:13 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:55.151 21:51:13 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:55.151 21:51:13 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.151 21:51:13 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.151 21:51:13 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:55.151 21:51:13 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:55.151 21:51:13 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:55.151 [2024-07-13 21:51:14.037690] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
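(Note: the accel_test wrapper traced above ultimately execs SPDK's accel_perf example binary. A minimal standalone sketch of the dif_generate case that just finished — assuming the SPDK tree built at the workspace path this log uses, and omitting the harness-only "-c /dev/fd/62" JSON-config plumbing — is:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path taken from this log
    # run the software dif_generate workload for 1 second, as the harness did above
    "$SPDK/build/examples/accel_perf" -t 1 -w dif_generate

The repeated IFS=: / read -r var val / case "$var" steps in the trace are accel.sh walking a colon-separated description of that workload one key/value pair at a time, which is why each setting — the 4096-byte buffers, the extra 512-byte and 8-byte values the DIF case adds, queue depth 32, '1 seconds', module 'software' — shows up as its own read/case line.)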
00:07:55.151 [2024-07-13 21:51:14.037807] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3955558 ] 00:07:55.151 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.151 [2024-07-13 21:51:14.167032] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.151 [2024-07-13 21:51:14.427507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.409 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:55.410 21:51:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.310 00:07:57.310 real 0m2.688s 00:07:57.310 user 0m2.450s 00:07:57.310 sys 0m0.234s 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.310 21:51:16 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:57.310 ************************************ 00:07:57.310 END TEST accel_dif_generate_copy 00:07:57.310 ************************************ 00:07:57.575 21:51:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:57.575 21:51:16 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:57.575 21:51:16 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:57.575 21:51:16 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:57.575 21:51:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.575 21:51:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:57.575 ************************************ 00:07:57.575 START TEST accel_comp 00:07:57.575 ************************************ 00:07:57.575 21:51:16 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:57.575 21:51:16 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:57.575 21:51:16 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:57.575 21:51:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:57.575 21:51:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:57.575 21:51:16 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:57.575 21:51:16 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:57.575 21:51:16 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:57.575 21:51:16 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:57.575 21:51:16 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:57.575 21:51:16 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.575 21:51:16 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.575 21:51:16 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:57.575 21:51:16 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:57.575 21:51:16 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:57.575 [2024-07-13 21:51:16.776209] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:57.575 [2024-07-13 21:51:16.776339] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3955859 ] 00:07:57.575 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.575 [2024-07-13 21:51:16.908864] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.832 [2024-07-13 21:51:17.170620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.089 21:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:58.089 21:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.089 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.089 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.089 21:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:58.089 21:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.089 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.089 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.089 21:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:58.089 21:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.089 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.089 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.089 21:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.090 21:51:17 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:58.090 21:51:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:00.620 21:51:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:00.620 00:08:00.620 real 0m2.710s 00:08:00.620 user 0m2.458s 00:08:00.620 sys 0m0.250s 00:08:00.620 21:51:19 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.620 21:51:19 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:00.620 ************************************ 00:08:00.620 END TEST accel_comp 00:08:00.620 ************************************ 00:08:00.620 21:51:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:00.620 21:51:19 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:00.620 21:51:19 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:00.620 21:51:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.620 21:51:19 accel -- 
common/autotest_common.sh@10 -- # set +x 00:08:00.620 ************************************ 00:08:00.620 START TEST accel_decomp 00:08:00.620 ************************************ 00:08:00.620 21:51:19 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:00.620 21:51:19 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:00.620 21:51:19 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:00.620 21:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.620 21:51:19 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:00.620 21:51:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.620 21:51:19 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:08:00.620 21:51:19 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:00.620 21:51:19 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.620 21:51:19 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.620 21:51:19 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.620 21:51:19 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.620 21:51:19 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.620 21:51:19 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:00.620 21:51:19 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:00.620 [2024-07-13 21:51:19.535046] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
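(For the decompress cases the harness points accel_perf at a pre-built compressed blob under test/accel/. A standalone sketch under the same path assumption, using only flags visible in this log:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path taken from this log
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y

Here -l names the input file, and -y is the extra switch this log shows only on the decompress runs, not on the compress run above.)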
00:08:00.620 [2024-07-13 21:51:19.535188] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3956250 ] 00:08:00.620 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.620 [2024-07-13 21:51:19.678588] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.620 [2024-07-13 21:51:19.940035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.879 21:51:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:00.879 21:51:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.879 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.879 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.879 21:51:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:00.879 21:51:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.879 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.879 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.879 21:51:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:00.879 21:51:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.879 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.879 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.879 21:51:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:00.879 21:51:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.879 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.879 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.879 21:51:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:00.880 21:51:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:03.409 21:51:22 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:03.409 21:51:22 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:03.409 00:08:03.409 real 0m2.713s 00:08:03.409 user 0m0.011s 00:08:03.409 sys 0m0.002s 00:08:03.409 21:51:22 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.409 21:51:22 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:03.409 ************************************ 00:08:03.409 END TEST accel_decomp 00:08:03.409 ************************************ 00:08:03.409 21:51:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:03.409 21:51:22 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:03.409 21:51:22 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:03.409 21:51:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.409 21:51:22 accel -- common/autotest_common.sh@10 -- # set +x 00:08:03.409 ************************************ 00:08:03.409 START TEST accel_decomp_full 00:08:03.409 ************************************ 00:08:03.409 21:51:22 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:03.409 21:51:22 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:03.409 21:51:22 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:03.409 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.409 21:51:22 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:03.409 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.409 21:51:22 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:03.409 21:51:22 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:03.409 21:51:22 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:03.409 21:51:22 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:03.409 21:51:22 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.409 21:51:22 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.409 21:51:22 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:03.409 21:51:22 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:03.409 21:51:22 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:03.409 [2024-07-13 21:51:22.292461] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:03.409 [2024-07-13 21:51:22.292584] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3956544 ] 00:08:03.409 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.409 [2024-07-13 21:51:22.422147] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.409 [2024-07-13 21:51:22.680534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.667 21:51:22 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.667 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.668 21:51:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:03.668 21:51:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:08:03.668 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.668 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:03.668 21:51:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:03.668 21:51:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:03.668 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:03.668 21:51:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:05.566 21:51:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:05.566 21:51:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:05.566 21:51:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:05.566 21:51:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:05.566 21:51:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:05.566 21:51:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:05.566 21:51:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:05.566 21:51:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:05.566 21:51:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:05.566 21:51:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:05.566 21:51:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:05.566 21:51:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:05.566 21:51:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:05.566 21:51:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:05.566 21:51:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:05.566 21:51:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:05.566 21:51:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:05.566 21:51:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:05.566 21:51:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:05.566 21:51:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:05.566 21:51:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:05.566 21:51:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:05.566 21:51:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:05.566 21:51:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:05.566 21:51:24 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:05.825 21:51:24 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:05.825 21:51:24 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:05.825 00:08:05.825 real 0m2.709s 00:08:05.825 user 0m2.469s 00:08:05.825 sys 0m0.237s 00:08:05.825 21:51:24 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.825 21:51:24 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:05.825 ************************************ 00:08:05.825 END TEST accel_decomp_full 00:08:05.825 ************************************ 00:08:05.825 21:51:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:05.825 21:51:24 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:05.825 21:51:24 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:08:05.825 21:51:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.825 21:51:24 accel -- common/autotest_common.sh@10 -- # set +x 00:08:05.825 ************************************ 00:08:05.825 START TEST accel_decomp_mcore 00:08:05.825 ************************************ 00:08:05.825 21:51:25 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:05.825 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:05.825 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:05.825 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.825 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:05.825 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.825 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:05.825 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:05.825 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:05.825 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:05.825 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.825 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.825 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:05.825 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:05.825 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:05.825 [2024-07-13 21:51:25.049347] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
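(The mcore variant is the same decompress run fanned out with a four-core mask: the EAL parameter line below carries -c 0xf, and four "Reactor started on core N" notices follow instead of one. Standalone sketch, same path assumption as above:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -m 0xf   # -m 0xf = cores 0-3
)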
00:08:05.825 [2024-07-13 21:51:25.049465] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3956948 ] 00:08:05.825 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.825 [2024-07-13 21:51:25.178286] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.083 [2024-07-13 21:51:25.444213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.083 [2024-07-13 21:51:25.444268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.083 [2024-07-13 21:51:25.444316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.083 [2024-07-13 21:51:25.444326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.341 21:51:25 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:06.341 21:51:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:08.903 00:08:08.903 real 0m2.729s 00:08:08.903 user 0m0.014s 00:08:08.903 sys 0m0.001s 00:08:08.903 21:51:27 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.903 21:51:27 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:08.903 ************************************ 00:08:08.903 END TEST accel_decomp_mcore 00:08:08.903 ************************************ 00:08:08.903 21:51:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:08.903 21:51:27 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:08.903 21:51:27 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:08.903 21:51:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.903 21:51:27 accel -- common/autotest_common.sh@10 -- # set +x 00:08:08.903 ************************************ 00:08:08.903 START TEST accel_decomp_full_mcore 00:08:08.903 ************************************ 00:08:08.904 21:51:27 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:08.904 21:51:27 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:08.904 21:51:27 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:08.904 21:51:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:08.904 21:51:27 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:08.904 21:51:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:08.904 21:51:27 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:08.904 21:51:27 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:08.904 21:51:27 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.904 21:51:27 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.904 21:51:27 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.904 21:51:27 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.904 21:51:27 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.904 21:51:27 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:08.904 21:51:27 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:08.904 [2024-07-13 21:51:27.826444] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
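The START TEST / END TEST banners and the real/user/sys timings in the trace above come from a run_test-style wrapper around each case. A minimal sketch of that shape, assuming plain bash `time` and echoed banners; this is illustrative only, not SPDK's actual run_test helper from common/autotest_common.sh:

run_test_sketch() {
    # Illustrative only: banner, timed body, matching banner, preserved exit code.
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"          # prints the real/user/sys lines seen in the log
    local rc=$?        # exit status of the timed command
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}
# e.g. run_test_sketch accel_decomp_full_mcore accel_test -t 1 -w decompress -l ./bib -y -o 0 -m 0xf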
00:08:08.904 [2024-07-13 21:51:27.826569] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3957250 ] 00:08:08.904 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.904 [2024-07-13 21:51:27.958940] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:08.904 [2024-07-13 21:51:28.226961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.904 [2024-07-13 21:51:28.227032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:08.904 [2024-07-13 21:51:28.227083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.904 [2024-07-13 21:51:28.227091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:09.162 21:51:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:11.694 00:08:11.694 real 0m2.702s 00:08:11.694 user 0m0.011s 00:08:11.694 sys 0m0.004s 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.694 21:51:30 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:11.694 ************************************ 00:08:11.694 END TEST accel_decomp_full_mcore 00:08:11.694 ************************************ 00:08:11.694 21:51:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:11.694 21:51:30 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:11.694 21:51:30 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:11.694 21:51:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.694 21:51:30 accel -- common/autotest_common.sh@10 -- # set +x 00:08:11.694 ************************************ 00:08:11.694 START TEST accel_decomp_mthread 00:08:11.694 ************************************ 00:08:11.694 21:51:30 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:11.694 21:51:30 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:11.694 21:51:30 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:11.694 21:51:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.694 21:51:30 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:11.694 21:51:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.694 21:51:30 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:11.694 21:51:30 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:11.694 21:51:30 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:11.694 21:51:30 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:11.694 21:51:30 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:11.694 21:51:30 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:11.694 21:51:30 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:11.694 21:51:30 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:11.694 21:51:30 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:11.694 [2024-07-13 21:51:30.573062] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
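Each accel_perf launch above passes its configuration as "-c /dev/fd/62", which points at a bash process-substitution descriptor rather than a file on disk. A hedged sketch of that mechanism, with jq standing in for accel_perf and a hypothetical config body:

accel_json_cfg='[{"method": "example_noop"}]'   # hypothetical placeholder, not a real accel method
jq . <(printf '%s\n' "$accel_json_cfg")         # <(...) expands to a /dev/fd/NN path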
00:08:11.694 [2024-07-13 21:51:30.573205] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3957659 ] 00:08:11.694 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.694 [2024-07-13 21:51:30.705197] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.694 [2024-07-13 21:51:30.966086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:11.953 21:51:31 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:11.953 21:51:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:13.855 21:51:33 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.855 00:08:13.855 real 0m2.697s 00:08:13.855 user 0m0.012s 00:08:13.855 sys 0m0.003s 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.855 21:51:33 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:13.855 ************************************ 00:08:13.855 END TEST accel_decomp_mthread 00:08:13.855 ************************************ 00:08:14.112 21:51:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:14.112 21:51:33 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:14.112 21:51:33 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:14.112 21:51:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.112 21:51:33 accel -- 
common/autotest_common.sh@10 -- # set +x 00:08:14.112 ************************************ 00:08:14.112 START TEST accel_decomp_full_mthread 00:08:14.112 ************************************ 00:08:14.112 21:51:33 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:14.112 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:14.112 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:14.112 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.112 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:14.112 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.112 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:14.112 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:14.112 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:14.112 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:14.112 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:14.112 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:14.112 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:14.112 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:14.112 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:14.112 [2024-07-13 21:51:33.322578] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
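The long runs of "IFS=:", "read -r var val", and "case $var in" entries above are the xtrace of a loop that parses a colon-separated key:value stream and latches settings such as accel_module=software and accel_opc=decompress. A sketch of that shape only, with assumed key names, since the real accel.sh dispatch is not reproduced here:

while IFS=: read -r var val; do
    case "$var" in
        module) accel_module=$val ;;   # cf. "accel_module=software" in the trace
        opc)    accel_opc=$val ;;      # cf. "accel_opc=decompress" in the trace
        *) : ;;                        # anything else is ignored
    esac
done <<'EOF'
module:software
opc:decompress
EOF
echo "module=$accel_module opc=$accel_opc"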
00:08:14.112 [2024-07-13 21:51:33.322700] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3957950 ] 00:08:14.112 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.112 [2024-07-13 21:51:33.455970] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.370 [2024-07-13 21:51:33.716560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.627 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:14.627 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:14.627 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.628 21:51:33 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:14.628 21:51:33 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:14.628 21:51:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:17.154 00:08:17.154 real 0m2.747s 00:08:17.154 user 0m2.499s 00:08:17.154 sys 0m0.246s 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.154 21:51:36 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:17.154 ************************************ 00:08:17.154 END 
TEST accel_decomp_full_mthread 00:08:17.154 ************************************ 00:08:17.154 21:51:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:17.154 21:51:36 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:17.154 21:51:36 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:17.154 21:51:36 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:17.154 21:51:36 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:17.154 21:51:36 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:17.154 21:51:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.154 21:51:36 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:17.154 21:51:36 accel -- common/autotest_common.sh@10 -- # set +x 00:08:17.154 21:51:36 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:17.154 21:51:36 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.154 21:51:36 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:17.154 21:51:36 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:17.154 21:51:36 accel -- accel/accel.sh@41 -- # jq -r . 00:08:17.154 ************************************ 00:08:17.154 START TEST accel_dif_functional_tests 00:08:17.154 ************************************ 00:08:17.154 21:51:36 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:17.154 [2024-07-13 21:51:36.166565] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:17.154 [2024-07-13 21:51:36.166740] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3958268 ] 00:08:17.154 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.154 [2024-07-13 21:51:36.312876] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:17.412 [2024-07-13 21:51:36.583699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.412 [2024-07-13 21:51:36.583743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.412 [2024-07-13 21:51:36.583754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.671 00:08:17.671 00:08:17.671 CUnit - A unit testing framework for C - Version 2.1-3 00:08:17.671 http://cunit.sourceforge.net/ 00:08:17.671 00:08:17.671 00:08:17.671 Suite: accel_dif 00:08:17.671 Test: verify: DIF generated, GUARD check ...passed 00:08:17.671 Test: verify: DIF generated, APPTAG check ...passed 00:08:17.671 Test: verify: DIF generated, REFTAG check ...passed 00:08:17.671 Test: verify: DIF not generated, GUARD check ...[2024-07-13 21:51:36.945506] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:17.671 passed 00:08:17.671 Test: verify: DIF not generated, APPTAG check ...[2024-07-13 21:51:36.945633] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:17.671 passed 00:08:17.671 Test: verify: DIF not generated, REFTAG check ...[2024-07-13 21:51:36.945704] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:17.671 passed 00:08:17.671 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:17.671 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-13 
21:51:36.945843] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:17.671 passed 00:08:17.671 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:17.671 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:17.671 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:17.671 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-13 21:51:36.946138] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:17.671 passed 00:08:17.671 Test: verify copy: DIF generated, GUARD check ...passed 00:08:17.671 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:17.671 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:17.671 Test: verify copy: DIF not generated, GUARD check ...[2024-07-13 21:51:36.946459] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:17.671 passed 00:08:17.671 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-13 21:51:36.946556] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:17.671 passed 00:08:17.671 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-13 21:51:36.946644] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:17.671 passed 00:08:17.671 Test: generate copy: DIF generated, GUARD check ...passed 00:08:17.671 Test: generate copy: DIF generated, APPTAG check ...passed 00:08:17.671 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:17.671 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:17.671 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:17.671 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:17.671 Test: generate copy: iovecs-len validate ...[2024-07-13 21:51:36.947146] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:17.671 passed 00:08:17.671 Test: generate copy: buffer alignment validate ...passed 00:08:17.671 00:08:17.671 Run Summary: Type Total Ran Passed Failed Inactive 00:08:17.671 suites 1 1 n/a 0 0 00:08:17.671 tests 26 26 26 0 0 00:08:17.671 asserts 115 115 115 0 n/a 00:08:17.671 00:08:17.671 Elapsed time = 0.005 seconds 00:08:19.046 00:08:19.046 real 0m2.200s 00:08:19.046 user 0m4.282s 00:08:19.046 sys 0m0.342s 00:08:19.046 21:51:38 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.046 21:51:38 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:19.046 ************************************ 00:08:19.046 END TEST accel_dif_functional_tests 00:08:19.046 ************************************ 00:08:19.046 21:51:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:19.046 00:08:19.046 real 1m4.968s 00:08:19.046 user 1m11.675s 00:08:19.046 sys 0m7.303s 00:08:19.046 21:51:38 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.046 21:51:38 accel -- common/autotest_common.sh@10 -- # set +x 00:08:19.046 ************************************ 00:08:19.046 END TEST accel 00:08:19.046 ************************************ 00:08:19.046 21:51:38 -- common/autotest_common.sh@1142 -- # return 0 00:08:19.046 21:51:38 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:19.046 21:51:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:19.046 21:51:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.046 21:51:38 -- common/autotest_common.sh@10 -- # set +x 00:08:19.046 ************************************ 00:08:19.046 START TEST accel_rpc 00:08:19.046 ************************************ 00:08:19.046 21:51:38 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:19.047 * Looking for test storage... 00:08:19.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:08:19.047 21:51:38 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:19.047 21:51:38 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3958641 00:08:19.047 21:51:38 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:19.047 21:51:38 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3958641 00:08:19.047 21:51:38 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 3958641 ']' 00:08:19.047 21:51:38 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.047 21:51:38 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:19.047 21:51:38 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.047 21:51:38 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:19.047 21:51:38 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:19.304 [2024-07-13 21:51:38.497933] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
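The "--wait-for-rpc" start above is paired with a waitforlisten step that blocks until the target's UNIX domain socket (/var/tmp/spdk.sock) appears. A hedged sketch of such a helper; the retry count and sleep interval are assumptions, and SPDK's real version lives in common/autotest_common.sh:

waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [[ -S $sock ]] && return 0               # RPC socket is listening
        sleep 0.1
    done
    return 1                                     # assumed timeout bound
}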
00:08:19.304 [2024-07-13 21:51:38.498084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3958641 ] 00:08:19.304 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.304 [2024-07-13 21:51:38.645864] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.561 [2024-07-13 21:51:38.900374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.127 21:51:39 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:20.127 21:51:39 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:20.127 21:51:39 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:20.127 21:51:39 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:20.127 21:51:39 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:20.127 21:51:39 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:20.127 21:51:39 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:20.127 21:51:39 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:20.127 21:51:39 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.127 21:51:39 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.383 ************************************ 00:08:20.383 START TEST accel_assign_opcode 00:08:20.383 ************************************ 00:08:20.383 21:51:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:08:20.383 21:51:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:20.383 21:51:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.383 21:51:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:20.383 [2024-07-13 21:51:39.534932] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:20.383 21:51:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.383 21:51:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:20.383 21:51:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.383 21:51:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:20.383 [2024-07-13 21:51:39.542895] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:20.383 21:51:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.383 21:51:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:20.383 21:51:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.383 21:51:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:21.314 21:51:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.315 21:51:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:21.315 21:51:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.315 21:51:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 
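The assign-then-verify sequence traced above, written out linearly. The rpc.py path and every RPC name here appear verbatim in this log; only the bundling into one snippet is added:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc accel_assign_opc -o copy -m software     # pin the copy opcode to the software module
$rpc framework_start_init                     # finish subsystem init after --wait-for-rpc
$rpc accel_get_opc_assignments | jq -r .copy | grep software   # expect "software"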
00:08:21.315 21:51:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:21.315 21:51:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:21.315 21:51:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.315 software 00:08:21.315 00:08:21.315 real 0m0.954s 00:08:21.315 user 0m0.040s 00:08:21.315 sys 0m0.007s 00:08:21.315 21:51:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.315 21:51:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:21.315 ************************************ 00:08:21.315 END TEST accel_assign_opcode 00:08:21.315 ************************************ 00:08:21.315 21:51:40 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:21.315 21:51:40 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3958641 00:08:21.315 21:51:40 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 3958641 ']' 00:08:21.315 21:51:40 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 3958641 00:08:21.315 21:51:40 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:08:21.315 21:51:40 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:21.315 21:51:40 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3958641 00:08:21.315 21:51:40 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:21.315 21:51:40 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:21.315 21:51:40 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3958641' 00:08:21.315 killing process with pid 3958641 00:08:21.315 21:51:40 accel_rpc -- common/autotest_common.sh@967 -- # kill 3958641 00:08:21.315 21:51:40 accel_rpc -- common/autotest_common.sh@972 -- # wait 3958641 00:08:23.844 00:08:23.844 real 0m4.711s 00:08:23.844 user 0m4.745s 00:08:23.844 sys 0m0.643s 00:08:23.844 21:51:43 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.844 21:51:43 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.844 ************************************ 00:08:23.844 END TEST accel_rpc 00:08:23.844 ************************************ 00:08:23.844 21:51:43 -- common/autotest_common.sh@1142 -- # return 0 00:08:23.844 21:51:43 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:23.844 21:51:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:23.844 21:51:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.844 21:51:43 -- common/autotest_common.sh@10 -- # set +x 00:08:23.844 ************************************ 00:08:23.844 START TEST app_cmdline 00:08:23.844 ************************************ 00:08:23.844 21:51:43 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:23.844 * Looking for test storage... 
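The killprocess trace above follows a fixed recipe: confirm the pid is alive with kill -0, name it via ps for the log line, signal it, then wait. A condensed sketch of those steps; any extra handling in SPDK's helper (for example for sudo-owned processes) is omitted:

killprocess_sketch() {
    local pid=$1 name
    kill -0 "$pid" || return 1                   # still running?
    name=$(ps --no-headers -o comm= "$pid")      # e.g. "reactor_0" above
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null                      # reap it if it is our child
}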
00:08:23.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:23.844 21:51:43 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:23.844 21:51:43 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3959294 00:08:23.844 21:51:43 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:23.844 21:51:43 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3959294 00:08:23.844 21:51:43 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 3959294 ']' 00:08:23.844 21:51:43 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.844 21:51:43 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:23.844 21:51:43 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.844 21:51:43 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:23.844 21:51:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:24.101 [2024-07-13 21:51:43.262522] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:24.101 [2024-07-13 21:51:43.262681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3959294 ] 00:08:24.101 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.101 [2024-07-13 21:51:43.386311] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.358 [2024-07-13 21:51:43.642876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.324 21:51:44 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:25.324 21:51:44 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:08:25.324 21:51:44 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:25.582 { 00:08:25.582 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:08:25.582 "fields": { 00:08:25.582 "major": 24, 00:08:25.582 "minor": 9, 00:08:25.582 "patch": 0, 00:08:25.582 "suffix": "-pre", 00:08:25.582 "commit": "719d03c6a" 00:08:25.582 } 00:08:25.582 } 00:08:25.582 21:51:44 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:25.582 21:51:44 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:25.582 21:51:44 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:25.582 21:51:44 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:25.582 21:51:44 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:25.582 21:51:44 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.582 21:51:44 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:25.582 21:51:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:25.582 21:51:44 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:25.582 21:51:44 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.582 21:51:44 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:25.582 21:51:44 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:25.582 21:51:44 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:25.582 21:51:44 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:25.582 21:51:44 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:25.582 21:51:44 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:25.582 21:51:44 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:25.582 21:51:44 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:25.582 21:51:44 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:25.583 21:51:44 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:25.583 21:51:44 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:25.583 21:51:44 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:25.583 21:51:44 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:25.583 21:51:44 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:25.841 request: 00:08:25.841 { 00:08:25.841 "method": "env_dpdk_get_mem_stats", 00:08:25.841 "req_id": 1 00:08:25.841 } 00:08:25.841 Got JSON-RPC error response 00:08:25.841 response: 00:08:25.841 { 00:08:25.841 "code": -32601, 00:08:25.841 "message": "Method not found" 00:08:25.841 } 00:08:25.841 21:51:45 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:25.841 21:51:45 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:25.841 21:51:45 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:25.841 21:51:45 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:25.841 21:51:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3959294 00:08:25.841 21:51:45 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 3959294 ']' 00:08:25.841 21:51:45 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 3959294 00:08:25.841 21:51:45 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:25.841 21:51:45 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:25.841 21:51:45 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3959294 00:08:25.841 21:51:45 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:25.841 21:51:45 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:25.841 21:51:45 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3959294' 00:08:25.841 killing process with pid 3959294 00:08:25.841 21:51:45 app_cmdline -- common/autotest_common.sh@967 -- # kill 3959294 00:08:25.841 21:51:45 app_cmdline -- common/autotest_common.sh@972 -- # wait 3959294 00:08:28.368 00:08:28.368 real 0m4.591s 00:08:28.368 user 0m4.972s 00:08:28.368 sys 0m0.690s 00:08:28.368 21:51:47 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:08:28.368 21:51:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:28.368 ************************************ 00:08:28.368 END TEST app_cmdline 00:08:28.368 ************************************ 00:08:28.368 21:51:47 -- common/autotest_common.sh@1142 -- # return 0 00:08:28.368 21:51:47 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:28.368 21:51:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:28.368 21:51:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.368 21:51:47 -- common/autotest_common.sh@10 -- # set +x 00:08:28.368 ************************************ 00:08:28.368 START TEST version 00:08:28.368 ************************************ 00:08:28.368 21:51:47 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:28.626 * Looking for test storage... 00:08:28.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:28.626 21:51:47 version -- app/version.sh@17 -- # get_header_version major 00:08:28.626 21:51:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:28.626 21:51:47 version -- app/version.sh@14 -- # cut -f2 00:08:28.626 21:51:47 version -- app/version.sh@14 -- # tr -d '"' 00:08:28.626 21:51:47 version -- app/version.sh@17 -- # major=24 00:08:28.626 21:51:47 version -- app/version.sh@18 -- # get_header_version minor 00:08:28.626 21:51:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:28.626 21:51:47 version -- app/version.sh@14 -- # cut -f2 00:08:28.626 21:51:47 version -- app/version.sh@14 -- # tr -d '"' 00:08:28.626 21:51:47 version -- app/version.sh@18 -- # minor=9 00:08:28.626 21:51:47 version -- app/version.sh@19 -- # get_header_version patch 00:08:28.626 21:51:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:28.626 21:51:47 version -- app/version.sh@14 -- # cut -f2 00:08:28.626 21:51:47 version -- app/version.sh@14 -- # tr -d '"' 00:08:28.626 21:51:47 version -- app/version.sh@19 -- # patch=0 00:08:28.626 21:51:47 version -- app/version.sh@20 -- # get_header_version suffix 00:08:28.626 21:51:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:28.626 21:51:47 version -- app/version.sh@14 -- # cut -f2 00:08:28.626 21:51:47 version -- app/version.sh@14 -- # tr -d '"' 00:08:28.626 21:51:47 version -- app/version.sh@20 -- # suffix=-pre 00:08:28.626 21:51:47 version -- app/version.sh@22 -- # version=24.9 00:08:28.626 21:51:47 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:28.626 21:51:47 version -- app/version.sh@28 -- # version=24.9rc0 00:08:28.626 21:51:47 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:28.626 21:51:47 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:08:28.626 21:51:47 version -- app/version.sh@30 -- # py_version=24.9rc0 00:08:28.626 21:51:47 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:28.626 00:08:28.626 real 0m0.110s 00:08:28.626 user 0m0.059s 00:08:28.626 sys 0m0.073s 00:08:28.626 21:51:47 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:28.626 21:51:47 version -- common/autotest_common.sh@10 -- # set +x 00:08:28.626 ************************************ 00:08:28.626 END TEST version 00:08:28.626 ************************************ 00:08:28.626 21:51:47 -- common/autotest_common.sh@1142 -- # return 0 00:08:28.626 21:51:47 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:28.626 21:51:47 -- spdk/autotest.sh@198 -- # uname -s 00:08:28.626 21:51:47 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:28.626 21:51:47 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:28.626 21:51:47 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:28.626 21:51:47 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:28.626 21:51:47 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:28.626 21:51:47 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:28.626 21:51:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:28.626 21:51:47 -- common/autotest_common.sh@10 -- # set +x 00:08:28.626 21:51:47 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:28.626 21:51:47 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:28.626 21:51:47 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:28.626 21:51:47 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:28.626 21:51:47 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:08:28.626 21:51:47 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:08:28.626 21:51:47 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:28.626 21:51:47 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:28.626 21:51:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.626 21:51:47 -- common/autotest_common.sh@10 -- # set +x 00:08:28.626 ************************************ 00:08:28.626 START TEST nvmf_tcp 00:08:28.626 ************************************ 00:08:28.626 21:51:47 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:28.626 * Looking for test storage... 00:08:28.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:28.626 21:51:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:28.626 21:51:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:28.626 21:51:47 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:28.626 21:51:47 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:08:28.626 21:51:47 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.626 21:51:47 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.626 21:51:47 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.626 21:51:47 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.626 21:51:47 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.626 21:51:47 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.626 21:51:47 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.626 21:51:47 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.626 21:51:47 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.626 21:51:47 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.626 21:51:47 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:28.626 21:51:47 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:28.626 21:51:47 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.626 21:51:47 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.626 21:51:47 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:28.626 21:51:47 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.626 21:51:47 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:28.626 21:51:48 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.626 21:51:48 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.626 21:51:48 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.626 21:51:48 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.626 21:51:48 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.626 21:51:48 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.626 21:51:48 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:08:28.626 21:51:48 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.626 21:51:48 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:08:28.626 21:51:48 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:28.626 21:51:48 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:28.626 21:51:48 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.626 21:51:48 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.626 21:51:48 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.626 21:51:48 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:28.626 21:51:48 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:28.626 21:51:48 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:28.626 21:51:48 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:28.626 21:51:48 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:28.626 21:51:48 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:28.626 21:51:48 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:28.626 21:51:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:28.626 21:51:48 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:28.626 21:51:48 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:28.626 21:51:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:28.626 21:51:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.626 21:51:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:28.884 ************************************ 00:08:28.884 START TEST nvmf_example 00:08:28.884 ************************************ 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:28.884 * Looking for test storage... 
00:08:28.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.884 21:51:48 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:08:28.885 21:51:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:30.783 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:30.783 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:30.783 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:30.784 Found net devices under 
0000:0a:00.0: cvl_0_0 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:30.784 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:30.784 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.041 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.041 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:08:31.041 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:31.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:08:31.041 00:08:31.041 --- 10.0.0.2 ping statistics --- 00:08:31.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.041 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:08:31.042 00:08:31.042 --- 10.0.0.1 ping statistics --- 00:08:31.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.042 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3961591 00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3961591 00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 3961591 ']' 00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
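At this point the harness has just launched the example NVMe-oF target and is waiting for its RPC socket. A minimal manual reproduction of that launch step, assuming the SPDK checkout and the cvl_0_0_ns_spdk namespace set up earlier in this log, might look like the following sketch; the backgrounding and the simple socket poll are illustrative stand-ins for the harness's nvmfexamplestart and waitforlisten helpers, while the netns name and the -i 0 -g 10000 -m 0xF flags are taken verbatim from the log:

    # Sketch only: launch the example nvmf target in the test netns and wait for RPC.
    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed checkout path
    sudo ip netns exec cvl_0_0_ns_spdk \
        "$SPDK_ROOT/build/examples/nvmf" -i 0 -g 10000 -m 0xF &
    # Poll for the JSON-RPC socket before issuing the rpc_cmd calls that follow.
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done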
00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:31.042 21:51:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:31.042 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.975 21:51:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:31.975 21:51:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:08:31.975 21:51:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:31.975 21:51:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:31.975 21:51:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:31.975 21:51:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:31.975 21:51:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.975 21:51:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:31.975 21:51:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.975 21:51:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:31.975 21:51:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:31.975 21:51:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:32.233 21:51:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.234 21:51:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:32.234 21:51:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:32.234 21:51:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.234 21:51:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:32.234 21:51:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.234 21:51:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:32.234 21:51:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:32.234 21:51:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.234 21:51:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:32.234 21:51:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.234 21:51:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:32.234 21:51:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.234 21:51:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:32.234 21:51:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.234 21:51:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:32.234 21:51:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:32.234 EAL: No free 2048 kB hugepages reported on node 1 
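The spdk_nvme_perf invocation just above is the client side of the test; its results follow. For readability, here is the same command restated with each flag glossed (a sketch: the array layout is illustrative, and the glosses follow spdk_nvme_perf's standard usage text rather than anything specific to this run):

    # Annotated restatement of the perf command from the log above.
    perf_args=(
        -q 64       # queue depth: 64 outstanding I/Os
        -o 4096     # I/O size: 4 KiB
        -w randrw   # workload: random mixed read/write
        -M 30       # rwmixread: 30% reads, 70% writes
        -t 10       # run time: 10 seconds
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
    )
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf "${perf_args[@]}"

As a quick consistency check on the table below: 11388.58 IOPS at 4096 bytes per I/O is 11388.58 × 4096 / 2^20 ≈ 44.49 MiB/s, which matches the MiB/s column.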
00:08:44.430 Initializing NVMe Controllers
00:08:44.430 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:44.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:44.430 Initialization complete. Launching workers.
00:08:44.430 ========================================================
00:08:44.430 Latency(us)
00:08:44.430 Device Information : IOPS MiB/s Average min max
00:08:44.430 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11388.58 44.49 5619.27 1277.32 22987.71
00:08:44.430 ========================================================
00:08:44.430 Total : 11388.58 44.49 5619.27 1277.32 22987.71
00:08:44.430
00:08:44.430 21:52:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:44.430 21:52:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:44.430 21:52:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:44.430 21:52:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:44.430 21:52:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:44.430 21:52:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:44.430 21:52:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:44.430 21:52:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:44.430 rmmod nvme_tcp 00:08:44.430 rmmod nvme_fabrics 00:08:44.430 rmmod nvme_keyring 00:08:44.430 21:52:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:44.430 21:52:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:44.430 21:52:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:44.430 21:52:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3961591 ']' 00:08:44.430 21:52:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3961591 00:08:44.430 21:52:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 3961591 ']' 00:08:44.430 21:52:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 3961591 00:08:44.430 21:52:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:08:44.430 21:52:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:44.430 21:52:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3961591 00:08:44.430 21:52:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:08:44.430 21:52:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:08:44.430 21:52:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3961591' 00:08:44.430 killing process with pid 3961591 00:08:44.430 21:52:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 3961591 00:08:44.430 21:52:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 3961591 00:08:44.430 nvmf threads initialize successfully 00:08:44.430 bdev subsystem init successfully 00:08:44.430 created a nvmf target service 00:08:44.430 create targets's poll groups done 00:08:44.430 all subsystems of target started 00:08:44.430 nvmf target is running 00:08:44.430 all subsystems of target stopped 00:08:44.430 destroy targets's poll groups done 00:08:44.430 destroyed the nvmf target service 00:08:44.430 bdev subsystem finish successfully 00:08:44.430 nvmf threads destroy successfully 00:08:44.430 21:52:02
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:44.430 21:52:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:44.430 21:52:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:44.430 21:52:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:44.430 21:52:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:44.430 21:52:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.430 21:52:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.430 21:52:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.808 21:52:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:45.808 21:52:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:45.808 21:52:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:45.808 21:52:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:45.808 00:08:45.808 real 0m17.019s 00:08:45.808 user 0m43.666s 00:08:45.808 sys 0m4.903s 00:08:45.808 21:52:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:45.808 21:52:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:45.808 ************************************ 00:08:45.808 END TEST nvmf_example 00:08:45.808 ************************************ 00:08:45.808 21:52:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:45.808 21:52:05 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:45.808 21:52:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:45.808 21:52:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.808 21:52:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:45.808 ************************************ 00:08:45.808 START TEST nvmf_filesystem 00:08:45.808 ************************************ 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:45.808 * Looking for test storage... 
00:08:45.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:45.808 21:52:05 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:45.808 21:52:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:45.809 #define SPDK_CONFIG_H 00:08:45.809 #define SPDK_CONFIG_APPS 1 00:08:45.809 #define SPDK_CONFIG_ARCH native 00:08:45.809 #define SPDK_CONFIG_ASAN 1 00:08:45.809 #undef SPDK_CONFIG_AVAHI 00:08:45.809 #undef SPDK_CONFIG_CET 00:08:45.809 #define SPDK_CONFIG_COVERAGE 1 00:08:45.809 #define SPDK_CONFIG_CROSS_PREFIX 00:08:45.809 #undef SPDK_CONFIG_CRYPTO 00:08:45.809 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:45.809 #undef SPDK_CONFIG_CUSTOMOCF 00:08:45.809 #undef SPDK_CONFIG_DAOS 00:08:45.809 #define SPDK_CONFIG_DAOS_DIR 00:08:45.809 #define SPDK_CONFIG_DEBUG 1 00:08:45.809 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:45.809 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:45.809 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:45.809 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:45.809 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:45.809 #undef SPDK_CONFIG_DPDK_UADK 00:08:45.809 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:45.809 #define SPDK_CONFIG_EXAMPLES 1 00:08:45.809 #undef SPDK_CONFIG_FC 00:08:45.809 #define SPDK_CONFIG_FC_PATH 00:08:45.809 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:45.809 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:45.809 #undef SPDK_CONFIG_FUSE 00:08:45.809 #undef SPDK_CONFIG_FUZZER 00:08:45.809 #define SPDK_CONFIG_FUZZER_LIB 00:08:45.809 #undef SPDK_CONFIG_GOLANG 00:08:45.809 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:45.809 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:45.809 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:45.809 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:45.809 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:45.809 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:45.809 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:45.809 #define SPDK_CONFIG_IDXD 1 00:08:45.809 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:45.809 #undef SPDK_CONFIG_IPSEC_MB 00:08:45.809 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:45.809 #define SPDK_CONFIG_ISAL 1 00:08:45.809 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:45.809 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:45.809 #define SPDK_CONFIG_LIBDIR 00:08:45.809 #undef SPDK_CONFIG_LTO 00:08:45.809 #define SPDK_CONFIG_MAX_LCORES 128 00:08:45.809 #define SPDK_CONFIG_NVME_CUSE 1 00:08:45.809 #undef SPDK_CONFIG_OCF 00:08:45.809 #define SPDK_CONFIG_OCF_PATH 00:08:45.809 #define 
SPDK_CONFIG_OPENSSL_PATH 00:08:45.809 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:45.809 #define SPDK_CONFIG_PGO_DIR 00:08:45.809 #undef SPDK_CONFIG_PGO_USE 00:08:45.809 #define SPDK_CONFIG_PREFIX /usr/local 00:08:45.809 #undef SPDK_CONFIG_RAID5F 00:08:45.809 #undef SPDK_CONFIG_RBD 00:08:45.809 #define SPDK_CONFIG_RDMA 1 00:08:45.809 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:45.809 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:45.809 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:45.809 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:45.809 #define SPDK_CONFIG_SHARED 1 00:08:45.809 #undef SPDK_CONFIG_SMA 00:08:45.809 #define SPDK_CONFIG_TESTS 1 00:08:45.809 #undef SPDK_CONFIG_TSAN 00:08:45.809 #define SPDK_CONFIG_UBLK 1 00:08:45.809 #define SPDK_CONFIG_UBSAN 1 00:08:45.809 #undef SPDK_CONFIG_UNIT_TESTS 00:08:45.809 #undef SPDK_CONFIG_URING 00:08:45.809 #define SPDK_CONFIG_URING_PATH 00:08:45.809 #undef SPDK_CONFIG_URING_ZNS 00:08:45.809 #undef SPDK_CONFIG_USDT 00:08:45.809 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:45.809 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:45.809 #undef SPDK_CONFIG_VFIO_USER 00:08:45.809 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:45.809 #define SPDK_CONFIG_VHOST 1 00:08:45.809 #define SPDK_CONFIG_VIRTIO 1 00:08:45.809 #undef SPDK_CONFIG_VTUNE 00:08:45.809 #define SPDK_CONFIG_VTUNE_DIR 00:08:45.809 #define SPDK_CONFIG_WERROR 1 00:08:45.809 #define SPDK_CONFIG_WPDK_DIR 00:08:45.809 #undef SPDK_CONFIG_XNVME 00:08:45.809 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:08:45.809 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 1 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:08:45.810 21:52:05 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:08:45.810 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
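[Editor's note: a condensed sketch of the leak-suppression setup the trace above walks through (autotest_common.sh@198-238). Paths, the libfuse3 entry, and the sanitizer option strings are copied from the log; the ordering here is simplified, not the script's exact control flow.]

  # Build an LSan suppression file so known leaks in third-party libraries
  # (here libfuse3) do not fail the ASan-instrumented test run.
  asan_suppression_file=/var/tmp/asan_suppression_file
  rm -rf "$asan_suppression_file"
  echo "leak:libfuse3.so" >> "$asan_suppression_file"
  # Point LeakSanitizer at the suppression list; the ASan/UBSan options
  # mirror the exports seen earlier in the trace.
  export LSAN_OPTIONS="suppressions=$asan_suppression_file"
  export ASAN_OPTIONS="new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0"
  export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"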
00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3963428 ]] 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3963428 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.7AjaqB 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.7AjaqB/tests/target /tmp/spdk.7AjaqB 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:08:46.070 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=55276040192 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994708992 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6718668800 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941716480 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997352448 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390182912 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398944256 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996267008 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997356544 00:08:46.071 21:52:05 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1089536 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:46.071 * Looking for test storage... 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=55276040192 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8933261312 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:46.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:08:46.071 21:52:05 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
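[Editor's note: a condensed sketch of the set_test_storage probe traced above — parse df for the mount backing the test directory, accept it only if the ~2 GiB budget fits and would not push usage past 95%. The requested size, the used/size figures, and the 95% ceiling come from the log (new_size = 8933261312 = 2214592512 + 6718668800); the use of df --output/-B1 here is a simplification of the script's df -T read loop.]

  requested_size=2214592512    # 2 GiB test budget plus 64 MiB margin, as in the trace
  target_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
  # Read byte-accurate size/used/avail for the mount backing the test dir.
  size=$(df -B1 --output=size "$target_dir" | tail -1)
  used=$(df -B1 --output=used "$target_dir" | tail -1)
  avail=$(df -B1 --output=avail "$target_dir" | tail -1)
  if (( avail >= requested_size )); then
      # The run may grow usage by requested_size bytes; reject the mount if
      # that would exceed 95% utilisation (the trace's new_size check).
      new_size=$((requested_size + used))
      if (( new_size * 100 / size <= 95 )); then
          export SPDK_TEST_STORAGE=$target_dir
          printf '* Found test storage at %s\n' "$target_dir"
      fi
  fi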
00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:46.071 21:52:05 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:46.071 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:46.072 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.072 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:46.072 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:46.072 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:46.072 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.072 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.072 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.072 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:46.072 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:46.072 21:52:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:46.072 21:52:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:47.977 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:47.977 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.977 21:52:07 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:47.977 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:47.977 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:47.977 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:47.978 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:47.978 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:47.978 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:47.978 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:47.978 21:52:07 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:47.978 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:47.978 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:47.978 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:47.978 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:47.978 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:47.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:47.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:08:47.978 00:08:47.978 --- 10.0.0.2 ping statistics --- 00:08:47.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.978 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:08:47.978 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:47.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:47.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:08:47.978 00:08:47.978 --- 10.0.0.1 ping statistics --- 00:08:47.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.978 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:08:47.978 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:47.978 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:47.978 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:47.978 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:47.978 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:47.978 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:47.978 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:47.978 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:47.978 21:52:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:48.236 21:52:07 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:48.236 21:52:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:48.236 21:52:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.236 21:52:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:48.236 ************************************ 00:08:48.236 START TEST nvmf_filesystem_no_in_capsule 00:08:48.236 ************************************ 00:08:48.237 21:52:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:08:48.237 21:52:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:48.237 21:52:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:48.237 21:52:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:48.237 21:52:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:08:48.237 21:52:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.237 21:52:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3965052 00:08:48.237 21:52:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:48.237 21:52:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3965052 00:08:48.237 21:52:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3965052 ']' 00:08:48.237 21:52:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.237 21:52:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:48.237 21:52:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.237 21:52:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:48.237 21:52:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.237 [2024-07-13 21:52:07.498801] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:48.237 [2024-07-13 21:52:07.498986] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.237 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.494 [2024-07-13 21:52:07.636587] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:48.755 [2024-07-13 21:52:07.898530] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.755 [2024-07-13 21:52:07.898590] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.755 [2024-07-13 21:52:07.898618] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.755 [2024-07-13 21:52:07.898638] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.755 [2024-07-13 21:52:07.898659] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
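For readers following the trace above: before any filesystem test runs, the harness builds a loopback NVMe/TCP testbed from the two ports under 0000:0a:00.0 and 0000:0a:00.1 by moving one of them into a network namespace, verifying reachability in both directions, and then launching nvmf_tgt inside that namespace. Below is a minimal sketch of that sequence, assuming the interfaces are already renamed cvl_0_0/cvl_0_1 as in this run; the commands, addresses, and nvmf_tgt flags are copied from the nvmf/common.sh steps traced above, not a general-purpose recipe.

# Minimal sketch of the netns topology built by nvmf_tcp_init above.
# Device names, IPs, and flags mirror this log; run as root.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                           # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator
modprobe nvme-tcp
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

The cross-namespace pings act as a gate: if either direction failed here, every later transport test would surface only as connect timeouts, so reachability is proven before the target is ever started.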
00:08:48.756 [2024-07-13 21:52:07.898788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.756 [2024-07-13 21:52:07.898860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.756 [2024-07-13 21:52:07.898897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.756 [2024-07-13 21:52:07.898921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.363 21:52:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:49.363 21:52:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:49.363 21:52:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:49.363 21:52:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:49.363 21:52:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:49.363 21:52:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.363 21:52:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:49.363 21:52:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:49.363 21:52:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.363 21:52:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:49.363 [2024-07-13 21:52:08.485551] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.363 21:52:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.363 21:52:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:49.363 21:52:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.363 21:52:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:49.933 Malloc1 00:08:49.933 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.933 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:49.933 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.933 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:49.933 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.933 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:49.933 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.933 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:08:49.933 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.933 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.934 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.934 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:49.934 [2024-07-13 21:52:09.079311] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.934 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.934 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:49.934 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:49.934 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:49.934 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:49.934 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:49.934 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:49.934 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.934 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:49.934 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.934 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:49.934 { 00:08:49.934 "name": "Malloc1", 00:08:49.934 "aliases": [ 00:08:49.934 "7a4716df-5f9f-47f9-8c9f-dbd9f0064980" 00:08:49.934 ], 00:08:49.934 "product_name": "Malloc disk", 00:08:49.934 "block_size": 512, 00:08:49.934 "num_blocks": 1048576, 00:08:49.934 "uuid": "7a4716df-5f9f-47f9-8c9f-dbd9f0064980", 00:08:49.934 "assigned_rate_limits": { 00:08:49.934 "rw_ios_per_sec": 0, 00:08:49.934 "rw_mbytes_per_sec": 0, 00:08:49.934 "r_mbytes_per_sec": 0, 00:08:49.934 "w_mbytes_per_sec": 0 00:08:49.934 }, 00:08:49.934 "claimed": true, 00:08:49.934 "claim_type": "exclusive_write", 00:08:49.934 "zoned": false, 00:08:49.934 "supported_io_types": { 00:08:49.934 "read": true, 00:08:49.934 "write": true, 00:08:49.934 "unmap": true, 00:08:49.934 "flush": true, 00:08:49.934 "reset": true, 00:08:49.934 "nvme_admin": false, 00:08:49.934 "nvme_io": false, 00:08:49.934 "nvme_io_md": false, 00:08:49.934 "write_zeroes": true, 00:08:49.934 "zcopy": true, 00:08:49.934 "get_zone_info": false, 00:08:49.934 "zone_management": false, 00:08:49.934 "zone_append": false, 00:08:49.934 "compare": false, 00:08:49.934 "compare_and_write": false, 00:08:49.934 "abort": true, 00:08:49.934 "seek_hole": false, 00:08:49.934 "seek_data": false, 00:08:49.934 "copy": true, 00:08:49.934 "nvme_iov_md": false 00:08:49.934 }, 00:08:49.934 "memory_domains": [ 00:08:49.934 { 
00:08:49.934 "dma_device_id": "system", 00:08:49.934 "dma_device_type": 1 00:08:49.934 }, 00:08:49.934 { 00:08:49.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.934 "dma_device_type": 2 00:08:49.934 } 00:08:49.934 ], 00:08:49.934 "driver_specific": {} 00:08:49.934 } 00:08:49.934 ]' 00:08:49.934 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:49.934 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:49.934 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:49.934 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:49.934 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:49.934 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:49.934 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:49.934 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:50.505 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:50.505 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:50.505 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:50.505 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:50.505 21:52:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:53.041 21:52:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:53.041 21:52:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:53.041 21:52:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:53.041 21:52:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:53.041 21:52:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:53.041 21:52:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:53.041 21:52:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:53.041 21:52:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:53.041 21:52:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:53.041 21:52:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:08:53.041 21:52:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:53.041 21:52:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:53.041 21:52:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:53.041 21:52:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:53.041 21:52:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:53.041 21:52:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:53.041 21:52:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:53.041 21:52:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:53.978 21:52:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:54.913 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:54.913 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:54.913 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:54.913 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.913 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:54.913 ************************************ 00:08:54.913 START TEST filesystem_ext4 00:08:54.913 ************************************ 00:08:54.914 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:54.914 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:54.914 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:54.914 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:54.914 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:54.914 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:54.914 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:54.914 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:54.914 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:54.914 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:54.914 21:52:14 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:54.914 mke2fs 1.46.5 (30-Dec-2021) 00:08:55.173 Discarding device blocks: 0/522240 done 00:08:55.173 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:55.173 Filesystem UUID: 09039652-7f83-4e32-b53d-9f94789fff8d 00:08:55.173 Superblock backups stored on blocks: 00:08:55.173 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:55.173 00:08:55.173 Allocating group tables: 0/64 done 00:08:55.173 Writing inode tables: 0/64 done 00:08:55.173 Creating journal (8192 blocks): done 00:08:55.173 Writing superblocks and filesystem accounting information: 0/64 done 00:08:55.173 00:08:55.173 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:55.173 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3965052 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:55.432 00:08:55.432 real 0m0.573s 00:08:55.432 user 0m0.021s 00:08:55.432 sys 0m0.051s 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:55.432 ************************************ 00:08:55.432 END TEST filesystem_ext4 00:08:55.432 ************************************ 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:55.432 21:52:14 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:55.432 ************************************ 00:08:55.432 START TEST filesystem_btrfs 00:08:55.432 ************************************ 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:55.432 21:52:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:56.001 btrfs-progs v6.6.2 00:08:56.001 See https://btrfs.readthedocs.io for more information. 00:08:56.001 00:08:56.001 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:56.001 NOTE: several default settings have changed in version 5.15, please make sure 00:08:56.001 this does not affect your deployments: 00:08:56.001 - DUP for metadata (-m dup) 00:08:56.001 - enabled no-holes (-O no-holes) 00:08:56.001 - enabled free-space-tree (-R free-space-tree) 00:08:56.001 00:08:56.001 Label: (null) 00:08:56.001 UUID: 863dbdcd-9fca-44df-ab83-99ae90c0b930 00:08:56.001 Node size: 16384 00:08:56.001 Sector size: 4096 00:08:56.001 Filesystem size: 510.00MiB 00:08:56.001 Block group profiles: 00:08:56.001 Data: single 8.00MiB 00:08:56.001 Metadata: DUP 32.00MiB 00:08:56.001 System: DUP 8.00MiB 00:08:56.001 SSD detected: yes 00:08:56.001 Zoned device: no 00:08:56.001 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:56.001 Runtime features: free-space-tree 00:08:56.001 Checksum: crc32c 00:08:56.001 Number of devices: 1 00:08:56.001 Devices: 00:08:56.001 ID SIZE PATH 00:08:56.001 1 510.00MiB /dev/nvme0n1p1 00:08:56.001 00:08:56.001 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:56.001 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:56.260 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:56.260 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:56.260 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:56.260 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:56.260 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:56.260 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:56.260 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3965052 00:08:56.260 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:56.260 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:56.260 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:56.260 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:56.260 00:08:56.260 real 0m0.747s 00:08:56.260 user 0m0.013s 00:08:56.260 sys 0m0.118s 00:08:56.260 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:56.261 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:56.261 ************************************ 00:08:56.261 END TEST filesystem_btrfs 00:08:56.261 ************************************ 00:08:56.261 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:56.261 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:56.261 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:56.261 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.261 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:56.261 ************************************ 00:08:56.261 START TEST filesystem_xfs 00:08:56.261 ************************************ 00:08:56.261 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:56.261 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:56.261 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:56.261 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:56.261 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:56.261 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:56.261 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:56.261 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:08:56.261 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:56.261 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:56.261 21:52:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:56.519 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:56.519 = sectsz=512 attr=2, projid32bit=1 00:08:56.519 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:56.519 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:56.519 data = bsize=4096 blocks=130560, imaxpct=25 00:08:56.519 = sunit=0 swidth=0 blks 00:08:56.519 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:56.519 log =internal log bsize=4096 blocks=16384, version=2 00:08:56.519 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:56.519 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:57.458 Discarding blocks...Done. 
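The xfs run that starts here repeats the same cycle already completed for ext4 and btrfs above. As a reference, this is a condensed sketch of the check each filesystem_* test performs, combining make_filesystem with the mount/touch/sync/umount sequence from target/filesystem.sh; $fstype, $nvmfpid, and the partition path follow this log, and the real helper's retry loop is reduced to a single attempt.

# Condensed per-filesystem check, assuming the GPT partition created by
# 'parted ... mkpart SPDK_TEST 0% 100%' and $nvmfpid from the target start.
dev=/dev/nvme0n1p1
case "$fstype" in
    ext4) force=-F ;;     # mkfs.ext4 needs -F to overwrite
    *)    force=-f ;;     # btrfs and xfs use -f
esac
mkfs."$fstype" "$force" "$dev"
mkdir -p /mnt/device
mount "$dev" /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                           # target survived the I/O
lsblk -l -o NAME | grep -q -w nvme0n1        # controller still present
lsblk -l -o NAME | grep -q -w nvme0n1p1      # partition still present

The kill -0 at the end is the actual assertion: the point of the test is not the filesystem itself but that the SPDK target keeps serving the namespace through format, journal writes, and unmount.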
00:08:57.458 21:52:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:57.458 21:52:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:59.996 21:52:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:59.996 21:52:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:59.996 21:52:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:59.996 21:52:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:59.996 21:52:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:59.996 21:52:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:59.996 21:52:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3965052 00:08:59.996 21:52:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:59.996 21:52:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:59.996 21:52:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:59.996 21:52:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:59.996 00:08:59.996 real 0m3.287s 00:08:59.996 user 0m0.017s 00:08:59.996 sys 0m0.062s 00:08:59.996 21:52:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:59.996 21:52:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:59.996 ************************************ 00:08:59.996 END TEST filesystem_xfs 00:08:59.996 ************************************ 00:08:59.996 21:52:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:59.996 21:52:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:59.996 21:52:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:59.996 21:52:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:59.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.996 21:52:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:59.996 21:52:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:59.996 21:52:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:59.996 21:52:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:59.996 21:52:19 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:59.996 21:52:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:59.996 21:52:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:59.996 21:52:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:59.996 21:52:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.996 21:52:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:59.996 21:52:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.996 21:52:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:59.996 21:52:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3965052 00:08:59.996 21:52:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3965052 ']' 00:08:59.996 21:52:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3965052 00:08:59.996 21:52:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:59.996 21:52:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:59.996 21:52:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3965052 00:08:59.996 21:52:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:59.996 21:52:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:59.996 21:52:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3965052' 00:08:59.996 killing process with pid 3965052 00:08:59.996 21:52:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 3965052 00:08:59.996 21:52:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 3965052 00:09:02.536 21:52:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:02.536 00:09:02.536 real 0m14.515s 00:09:02.536 user 0m53.476s 00:09:02.536 sys 0m2.045s 00:09:02.536 21:52:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:02.536 21:52:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:02.536 ************************************ 00:09:02.536 END TEST nvmf_filesystem_no_in_capsule 00:09:02.536 ************************************ 00:09:02.794 21:52:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:09:02.794 21:52:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:02.794 21:52:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:09:02.794 21:52:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.794 21:52:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:02.794 ************************************ 00:09:02.794 START TEST nvmf_filesystem_in_capsule 00:09:02.794 ************************************ 00:09:02.794 21:52:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:09:02.794 21:52:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:02.794 21:52:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:02.794 21:52:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:02.794 21:52:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:02.794 21:52:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:02.794 21:52:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3967008 00:09:02.794 21:52:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:02.794 21:52:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3967008 00:09:02.794 21:52:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3967008 ']' 00:09:02.794 21:52:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.794 21:52:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:02.794 21:52:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.794 21:52:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:02.794 21:52:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:02.794 [2024-07-13 21:52:22.065318] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:02.794 [2024-07-13 21:52:22.065478] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.794 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.053 [2024-07-13 21:52:22.210468] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:03.311 [2024-07-13 21:52:22.478984] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.311 [2024-07-13 21:52:22.479071] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:03.311 [2024-07-13 21:52:22.479100] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.311 [2024-07-13 21:52:22.479122] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.311 [2024-07-13 21:52:22.479144] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:03.311 [2024-07-13 21:52:22.479271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.311 [2024-07-13 21:52:22.479330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.311 [2024-07-13 21:52:22.479374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.311 [2024-07-13 21:52:22.479385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.877 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:03.877 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:09:03.877 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:03.877 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:03.877 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:03.877 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.877 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:03.877 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:03.877 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.877 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:03.877 [2024-07-13 21:52:23.028124] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.877 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.877 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:03.877 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.877 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:04.445 Malloc1 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.445 21:52:23 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:04.445 [2024-07-13 21:52:23.598460] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:04.445 { 00:09:04.445 "name": "Malloc1", 00:09:04.445 "aliases": [ 00:09:04.445 "1a4ff803-3764-41a7-9391-ea56ac5b3ac2" 00:09:04.445 ], 00:09:04.445 "product_name": "Malloc disk", 00:09:04.445 "block_size": 512, 00:09:04.445 "num_blocks": 1048576, 00:09:04.445 "uuid": "1a4ff803-3764-41a7-9391-ea56ac5b3ac2", 00:09:04.445 "assigned_rate_limits": { 00:09:04.445 "rw_ios_per_sec": 0, 00:09:04.445 "rw_mbytes_per_sec": 0, 00:09:04.445 "r_mbytes_per_sec": 0, 00:09:04.445 "w_mbytes_per_sec": 0 00:09:04.445 }, 00:09:04.445 "claimed": true, 00:09:04.445 "claim_type": "exclusive_write", 00:09:04.445 "zoned": false, 00:09:04.445 "supported_io_types": { 00:09:04.445 "read": true, 00:09:04.445 "write": true, 00:09:04.445 "unmap": true, 00:09:04.445 "flush": true, 00:09:04.445 "reset": true, 00:09:04.445 "nvme_admin": false, 00:09:04.445 "nvme_io": false, 00:09:04.445 "nvme_io_md": false, 00:09:04.445 "write_zeroes": true, 00:09:04.445 "zcopy": true, 00:09:04.445 "get_zone_info": false, 00:09:04.445 "zone_management": false, 00:09:04.445 
"zone_append": false, 00:09:04.445 "compare": false, 00:09:04.445 "compare_and_write": false, 00:09:04.445 "abort": true, 00:09:04.445 "seek_hole": false, 00:09:04.445 "seek_data": false, 00:09:04.445 "copy": true, 00:09:04.445 "nvme_iov_md": false 00:09:04.445 }, 00:09:04.445 "memory_domains": [ 00:09:04.445 { 00:09:04.445 "dma_device_id": "system", 00:09:04.445 "dma_device_type": 1 00:09:04.445 }, 00:09:04.445 { 00:09:04.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.445 "dma_device_type": 2 00:09:04.445 } 00:09:04.445 ], 00:09:04.445 "driver_specific": {} 00:09:04.445 } 00:09:04.445 ]' 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:04.445 21:52:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:05.066 21:52:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:05.066 21:52:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:05.066 21:52:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:05.066 21:52:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:05.066 21:52:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:06.970 21:52:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:06.970 21:52:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:06.970 21:52:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:06.970 21:52:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:06.970 21:52:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:06.970 21:52:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:06.970 21:52:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:06.970 21:52:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:09:06.970 21:52:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:06.970 21:52:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:06.970 21:52:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:06.970 21:52:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:06.970 21:52:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:06.970 21:52:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:06.970 21:52:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:06.970 21:52:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:06.970 21:52:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:07.538 21:52:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:08.477 21:52:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:09.411 21:52:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:09.411 21:52:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:09.411 21:52:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:09.411 21:52:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.411 21:52:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:09.411 ************************************ 00:09:09.411 START TEST filesystem_in_capsule_ext4 00:09:09.411 ************************************ 00:09:09.411 21:52:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:09.411 21:52:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:09.411 21:52:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:09.411 21:52:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:09.411 21:52:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:09:09.411 21:52:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:09.411 21:52:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:09:09.411 21:52:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:09:09.411 21:52:28 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:09:09.411 21:52:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:09:09.412 21:52:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:09.412 mke2fs 1.46.5 (30-Dec-2021) 00:09:09.412 Discarding device blocks: 0/522240 done 00:09:09.412 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:09.412 Filesystem UUID: f363e7b6-cbd4-4615-b28e-fa10717899b9 00:09:09.412 Superblock backups stored on blocks: 00:09:09.412 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:09.412 00:09:09.412 Allocating group tables: 0/64 done 00:09:09.412 Writing inode tables: 0/64 done 00:09:09.980 Creating journal (8192 blocks): done 00:09:10.915 Writing superblocks and filesystem accounting information: 0/64 done 00:09:10.915 00:09:10.915 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:09:10.915 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3967008 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:11.484 00:09:11.484 real 0m2.199s 00:09:11.484 user 0m0.023s 00:09:11.484 sys 0m0.049s 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:11.484 ************************************ 00:09:11.484 END TEST filesystem_in_capsule_ext4 00:09:11.484 ************************************ 00:09:11.484 
21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:11.484 ************************************ 00:09:11.484 START TEST filesystem_in_capsule_btrfs 00:09:11.484 ************************************ 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:09:11.484 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:09:11.485 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:09:11.485 21:52:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:12.053 btrfs-progs v6.6.2 00:09:12.053 See https://btrfs.readthedocs.io for more information. 00:09:12.053 00:09:12.053 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:12.053 NOTE: several default settings have changed in version 5.15, please make sure 00:09:12.053 this does not affect your deployments: 00:09:12.053 - DUP for metadata (-m dup) 00:09:12.053 - enabled no-holes (-O no-holes) 00:09:12.053 - enabled free-space-tree (-R free-space-tree) 00:09:12.053 00:09:12.053 Label: (null) 00:09:12.053 UUID: ced4fc80-f95c-4171-abfe-65c018b5fb0e 00:09:12.053 Node size: 16384 00:09:12.053 Sector size: 4096 00:09:12.053 Filesystem size: 510.00MiB 00:09:12.053 Block group profiles: 00:09:12.053 Data: single 8.00MiB 00:09:12.053 Metadata: DUP 32.00MiB 00:09:12.053 System: DUP 8.00MiB 00:09:12.053 SSD detected: yes 00:09:12.053 Zoned device: no 00:09:12.053 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:12.053 Runtime features: free-space-tree 00:09:12.053 Checksum: crc32c 00:09:12.053 Number of devices: 1 00:09:12.053 Devices: 00:09:12.053 ID SIZE PATH 00:09:12.053 1 510.00MiB /dev/nvme0n1p1 00:09:12.053 00:09:12.053 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:09:12.053 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:12.053 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:12.053 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:12.053 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:12.053 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:12.053 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:12.053 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:12.053 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3967008 00:09:12.053 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:12.053 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:12.053 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:12.053 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:12.053 00:09:12.053 real 0m0.584s 00:09:12.053 user 0m0.020s 00:09:12.053 sys 0m0.111s 00:09:12.053 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:12.053 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:12.053 ************************************ 00:09:12.053 END TEST filesystem_in_capsule_btrfs 00:09:12.053 ************************************ 00:09:12.053 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:09:12.053 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:12.053 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:12.053 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.053 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:12.313 ************************************ 00:09:12.313 START TEST filesystem_in_capsule_xfs 00:09:12.313 ************************************ 00:09:12.313 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:09:12.313 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:12.313 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:12.313 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:12.313 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:09:12.313 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:12.313 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:09:12.313 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:09:12.314 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:09:12.314 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:09:12.314 21:52:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:12.314 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:12.314 = sectsz=512 attr=2, projid32bit=1 00:09:12.314 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:12.314 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:12.314 data = bsize=4096 blocks=130560, imaxpct=25 00:09:12.314 = sunit=0 swidth=0 blks 00:09:12.314 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:12.314 log =internal log bsize=4096 blocks=16384, version=2 00:09:12.314 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:12.314 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:13.251 Discarding blocks...Done. 
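
The three make_filesystem invocations traced above (ext4, btrfs, xfs) all go through the same helper in common/autotest_common.sh (@924-@943). Below is a minimal sketch of that helper, reconstructed only from the xtrace lines visible in this log; the retry behavior implied by "local i=0" is not shown in the trace and is an assumption:

    # Sketch reconstructed from the xtrace above, not the verbatim SPDK helper.
    make_filesystem() {
        local fstype=$1         # @924: ext4 | btrfs | xfs
        local dev_name=$2       # @925: e.g. /dev/nvme0n1p1
        local i=0               # @926: retry counter (loop body not in this trace)
        local force             # @927
        if [ "$fstype" = ext4 ]; then
            force=-F            # @929/@930: mke2fs forces with -F
        else
            force=-f            # @932: mkfs.btrfs and mkfs.xfs force with -f
        fi
        # @935: run the mkfs for the requested filesystem type
        mkfs."$fstype" $force "$dev_name" || return 1
        return 0                # @943
    }
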
00:09:13.251 21:52:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:09:13.251 21:52:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3967008 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:15.788 00:09:15.788 real 0m3.269s 00:09:15.788 user 0m0.016s 00:09:15.788 sys 0m0.059s 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:15.788 ************************************ 00:09:15.788 END TEST filesystem_in_capsule_xfs 00:09:15.788 ************************************ 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:15.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:15.788 21:52:34 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3967008 00:09:15.788 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3967008 ']' 00:09:15.789 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3967008 00:09:15.789 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:09:15.789 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:15.789 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3967008 00:09:15.789 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:15.789 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:15.789 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3967008' 00:09:15.789 killing process with pid 3967008 00:09:15.789 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 3967008 00:09:15.789 21:52:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 3967008 00:09:18.326 21:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:18.326 00:09:18.326 real 0m15.592s 00:09:18.326 user 0m57.640s 00:09:18.326 sys 0m2.101s 00:09:18.326 21:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:18.326 21:52:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:18.326 ************************************ 00:09:18.326 END TEST nvmf_filesystem_in_capsule 00:09:18.326 ************************************ 00:09:18.326 21:52:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:09:18.326 21:52:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:18.326 21:52:37 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:09:18.326 21:52:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:09:18.326 21:52:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:18.326 21:52:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:09:18.326 21:52:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:18.326 21:52:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:18.326 rmmod nvme_tcp 00:09:18.326 rmmod nvme_fabrics 00:09:18.326 rmmod nvme_keyring 00:09:18.326 21:52:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:18.326 21:52:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:09:18.326 21:52:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:09:18.326 21:52:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:18.326 21:52:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:18.326 21:52:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:18.326 21:52:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:18.326 21:52:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:18.326 21:52:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:18.326 21:52:37 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.326 21:52:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.326 21:52:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.861 21:52:39 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:20.861 00:09:20.861 real 0m34.604s 00:09:20.861 user 1m52.030s 00:09:20.861 sys 0m5.715s 00:09:20.861 21:52:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.861 21:52:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:20.861 ************************************ 00:09:20.861 END TEST nvmf_filesystem 00:09:20.861 ************************************ 00:09:20.861 21:52:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:20.861 21:52:39 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:20.861 21:52:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:20.861 21:52:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.861 21:52:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:20.861 ************************************ 00:09:20.861 START TEST nvmf_target_discovery 00:09:20.861 ************************************ 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:20.861 * Looking for test storage... 
00:09:20.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:09:20.861 21:52:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:22.767 21:52:41 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:22.767 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:22.767 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:22.767 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:22.767 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:22.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:22.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:09:22.767 00:09:22.767 --- 10.0.0.2 ping statistics --- 00:09:22.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.767 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:22.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:22.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:09:22.767 00:09:22.767 --- 10.0.0.1 ping statistics --- 00:09:22.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.767 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:22.767 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:22.768 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:22.768 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:22.768 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:22.768 21:52:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:22.768 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:22.768 21:52:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:22.768 21:52:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.768 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3970895 00:09:22.768 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:22.768 21:52:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3970895 00:09:22.768 21:52:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 3970895 ']' 00:09:22.768 21:52:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.768 21:52:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:22.768 21:52:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:09:22.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.768 21:52:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:22.768 21:52:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:22.768 [2024-07-13 21:52:42.040173] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:22.768 [2024-07-13 21:52:42.040317] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.768 EAL: No free 2048 kB hugepages reported on node 1 00:09:23.063 [2024-07-13 21:52:42.185848] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:23.321 [2024-07-13 21:52:42.455477] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.321 [2024-07-13 21:52:42.455549] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.321 [2024-07-13 21:52:42.455577] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.321 [2024-07-13 21:52:42.455598] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.321 [2024-07-13 21:52:42.455620] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:23.321 [2024-07-13 21:52:42.455767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.321 [2024-07-13 21:52:42.455832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.321 [2024-07-13 21:52:42.455901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.321 [2024-07-13 21:52:42.455914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:23.887 21:52:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:23.887 21:52:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:09:23.887 21:52:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:23.887 21:52:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:23.887 21:52:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.887 [2024-07-13 21:52:43.020292] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
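
Up to this point the discovery test has only brought the target up: nvmf/common.sh@480 launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace, @482 waits for the RPC socket, and target/discovery.sh@23 creates the TCP transport. A sketch of that bring-up using the values from this log; invoking scripts/rpc.py directly is an assumption, since the script goes through its rpc_cmd wrapper:

    # Start the target in the namespace prepared earlier (nvmf/common.sh@480).
    # -i 0 selects shm id 0, -e 0xFFFF enables all tracepoint groups,
    # -m 0xF runs reactors on cores 0-3 (matches the reactor messages above).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # waitforlisten (@482) polls the UNIX domain socket /var/tmp/spdk.sock
    # until the app answers, then the TCP transport is created with the
    # options recorded in the trace (-u 8192 is the I/O unit size; @23).
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
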
00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.887 Null1 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.887 [2024-07-13 21:52:43.061790] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.887 Null2 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:23.887 21:52:43 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.887 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.888 Null3 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.888 Null4 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.888 21:52:43 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.888 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:09:24.147 00:09:24.147 Discovery Log Number of Records 6, Generation counter 6 00:09:24.147 =====Discovery Log Entry 0====== 00:09:24.147 trtype: tcp 00:09:24.147 adrfam: ipv4 00:09:24.147 subtype: current discovery subsystem 00:09:24.147 treq: not required 00:09:24.147 portid: 0 00:09:24.147 trsvcid: 4420 00:09:24.147 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:24.147 traddr: 10.0.0.2 00:09:24.147 eflags: explicit discovery connections, duplicate discovery information 00:09:24.147 sectype: none 00:09:24.147 =====Discovery Log Entry 1====== 00:09:24.147 trtype: tcp 00:09:24.147 adrfam: ipv4 00:09:24.147 subtype: nvme subsystem 00:09:24.147 treq: not required 00:09:24.147 portid: 0 00:09:24.147 trsvcid: 4420 00:09:24.147 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:24.147 traddr: 10.0.0.2 00:09:24.147 eflags: none 00:09:24.147 sectype: none 00:09:24.147 =====Discovery Log Entry 2====== 00:09:24.147 trtype: tcp 00:09:24.147 adrfam: ipv4 00:09:24.147 subtype: nvme subsystem 00:09:24.147 treq: not required 00:09:24.147 portid: 0 00:09:24.147 trsvcid: 4420 00:09:24.147 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:24.147 traddr: 10.0.0.2 00:09:24.147 eflags: none 00:09:24.147 sectype: none 00:09:24.147 =====Discovery Log Entry 3====== 00:09:24.147 trtype: tcp 00:09:24.147 adrfam: ipv4 00:09:24.147 subtype: nvme subsystem 00:09:24.147 treq: not required 00:09:24.147 portid: 0 00:09:24.147 trsvcid: 4420 00:09:24.147 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:24.147 traddr: 10.0.0.2 00:09:24.147 eflags: none 00:09:24.147 sectype: none 00:09:24.147 =====Discovery Log Entry 4====== 00:09:24.147 trtype: tcp 00:09:24.147 adrfam: ipv4 00:09:24.147 subtype: nvme subsystem 00:09:24.147 treq: not required 
00:09:24.147 portid: 0 00:09:24.147 trsvcid: 4420 00:09:24.147 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:24.147 traddr: 10.0.0.2 00:09:24.147 eflags: none 00:09:24.147 sectype: none 00:09:24.147 =====Discovery Log Entry 5====== 00:09:24.147 trtype: tcp 00:09:24.147 adrfam: ipv4 00:09:24.147 subtype: discovery subsystem referral 00:09:24.147 treq: not required 00:09:24.147 portid: 0 00:09:24.147 trsvcid: 4430 00:09:24.147 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:24.147 traddr: 10.0.0.2 00:09:24.147 eflags: none 00:09:24.147 sectype: none 00:09:24.147 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:24.147 Perform nvmf subsystem discovery via RPC 00:09:24.147 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:24.147 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:24.148 [ 00:09:24.148 { 00:09:24.148 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:24.148 "subtype": "Discovery", 00:09:24.148 "listen_addresses": [ 00:09:24.148 { 00:09:24.148 "trtype": "TCP", 00:09:24.148 "adrfam": "IPv4", 00:09:24.148 "traddr": "10.0.0.2", 00:09:24.148 "trsvcid": "4420" 00:09:24.148 } 00:09:24.148 ], 00:09:24.148 "allow_any_host": true, 00:09:24.148 "hosts": [] 00:09:24.148 }, 00:09:24.148 { 00:09:24.148 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.148 "subtype": "NVMe", 00:09:24.148 "listen_addresses": [ 00:09:24.148 { 00:09:24.148 "trtype": "TCP", 00:09:24.148 "adrfam": "IPv4", 00:09:24.148 "traddr": "10.0.0.2", 00:09:24.148 "trsvcid": "4420" 00:09:24.148 } 00:09:24.148 ], 00:09:24.148 "allow_any_host": true, 00:09:24.148 "hosts": [], 00:09:24.148 "serial_number": "SPDK00000000000001", 00:09:24.148 "model_number": "SPDK bdev Controller", 00:09:24.148 "max_namespaces": 32, 00:09:24.148 "min_cntlid": 1, 00:09:24.148 "max_cntlid": 65519, 00:09:24.148 "namespaces": [ 00:09:24.148 { 00:09:24.148 "nsid": 1, 00:09:24.148 "bdev_name": "Null1", 00:09:24.148 "name": "Null1", 00:09:24.148 "nguid": "3EA8E7E58A364AA5B0DA65CA88743DC1", 00:09:24.148 "uuid": "3ea8e7e5-8a36-4aa5-b0da-65ca88743dc1" 00:09:24.148 } 00:09:24.148 ] 00:09:24.148 }, 00:09:24.148 { 00:09:24.148 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:24.148 "subtype": "NVMe", 00:09:24.148 "listen_addresses": [ 00:09:24.148 { 00:09:24.148 "trtype": "TCP", 00:09:24.148 "adrfam": "IPv4", 00:09:24.148 "traddr": "10.0.0.2", 00:09:24.148 "trsvcid": "4420" 00:09:24.148 } 00:09:24.148 ], 00:09:24.148 "allow_any_host": true, 00:09:24.148 "hosts": [], 00:09:24.148 "serial_number": "SPDK00000000000002", 00:09:24.148 "model_number": "SPDK bdev Controller", 00:09:24.148 "max_namespaces": 32, 00:09:24.148 "min_cntlid": 1, 00:09:24.148 "max_cntlid": 65519, 00:09:24.148 "namespaces": [ 00:09:24.148 { 00:09:24.148 "nsid": 1, 00:09:24.148 "bdev_name": "Null2", 00:09:24.148 "name": "Null2", 00:09:24.148 "nguid": "BDCAFDAC268941D38124809293EE6EC2", 00:09:24.148 "uuid": "bdcafdac-2689-41d3-8124-809293ee6ec2" 00:09:24.148 } 00:09:24.148 ] 00:09:24.148 }, 00:09:24.148 { 00:09:24.148 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:24.148 "subtype": "NVMe", 00:09:24.148 "listen_addresses": [ 00:09:24.148 { 00:09:24.148 "trtype": "TCP", 00:09:24.148 "adrfam": "IPv4", 00:09:24.148 "traddr": "10.0.0.2", 00:09:24.148 "trsvcid": "4420" 00:09:24.148 } 00:09:24.148 ], 00:09:24.148 "allow_any_host": true, 
00:09:24.148 "hosts": [], 00:09:24.148 "serial_number": "SPDK00000000000003", 00:09:24.148 "model_number": "SPDK bdev Controller", 00:09:24.148 "max_namespaces": 32, 00:09:24.148 "min_cntlid": 1, 00:09:24.148 "max_cntlid": 65519, 00:09:24.148 "namespaces": [ 00:09:24.148 { 00:09:24.148 "nsid": 1, 00:09:24.148 "bdev_name": "Null3", 00:09:24.148 "name": "Null3", 00:09:24.148 "nguid": "BE2E867D94D04BA596CC3055A2EBF837", 00:09:24.148 "uuid": "be2e867d-94d0-4ba5-96cc-3055a2ebf837" 00:09:24.148 } 00:09:24.148 ] 00:09:24.148 }, 00:09:24.148 { 00:09:24.148 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:24.148 "subtype": "NVMe", 00:09:24.148 "listen_addresses": [ 00:09:24.148 { 00:09:24.148 "trtype": "TCP", 00:09:24.148 "adrfam": "IPv4", 00:09:24.148 "traddr": "10.0.0.2", 00:09:24.148 "trsvcid": "4420" 00:09:24.148 } 00:09:24.148 ], 00:09:24.148 "allow_any_host": true, 00:09:24.148 "hosts": [], 00:09:24.148 "serial_number": "SPDK00000000000004", 00:09:24.148 "model_number": "SPDK bdev Controller", 00:09:24.148 "max_namespaces": 32, 00:09:24.148 "min_cntlid": 1, 00:09:24.148 "max_cntlid": 65519, 00:09:24.148 "namespaces": [ 00:09:24.148 { 00:09:24.148 "nsid": 1, 00:09:24.148 "bdev_name": "Null4", 00:09:24.148 "name": "Null4", 00:09:24.148 "nguid": "F1AB75F0E4544886A5728AD07E4063F0", 00:09:24.148 "uuid": "f1ab75f0-e454-4886-a572-8ad07e4063f0" 00:09:24.148 } 00:09:24.148 ] 00:09:24.148 } 00:09:24.148 ] 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:24.148 21:52:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:24.148 rmmod nvme_tcp 00:09:24.148 rmmod nvme_fabrics 00:09:24.407 rmmod nvme_keyring 00:09:24.407 21:52:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:24.407 21:52:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:24.407 21:52:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:24.407 21:52:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3970895 ']' 00:09:24.407 21:52:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3970895 00:09:24.407 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 3970895 ']' 00:09:24.407 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 3970895 00:09:24.407 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:09:24.407 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:24.407 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3970895 00:09:24.407 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:24.407 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:24.407 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3970895' 00:09:24.407 killing process with pid 3970895 00:09:24.407 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 3970895 00:09:24.407 21:52:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 3970895 00:09:25.787 21:52:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:25.787 21:52:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:25.787 21:52:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:25.787 21:52:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:25.787 21:52:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:25.787 21:52:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.787 21:52:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:25.787 21:52:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.694 21:52:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:27.694 00:09:27.694 real 0m7.192s 00:09:27.694 user 0m9.191s 00:09:27.694 sys 0m2.015s 00:09:27.694 21:52:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:27.694 21:52:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:27.694 ************************************ 00:09:27.694 END TEST nvmf_target_discovery 00:09:27.694 ************************************ 00:09:27.694 21:52:46 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:09:27.694 21:52:46 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:27.694 21:52:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:27.694 21:52:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.694 21:52:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:27.694 ************************************ 00:09:27.694 START TEST nvmf_referrals 00:09:27.694 ************************************ 00:09:27.695 21:52:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:27.695 * Looking for test storage... 00:09:27.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
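referrals.sh exercises the discovery-referral RPCs against these three loopback addresses on port 4430. The add/verify/remove cycle traced below can be reduced to a few direct commands; a minimal sketch, assuming a target whose discovery service listens on 10.0.0.2:8009 as configured later in this run, with rpc_cmd again standing in for scripts/rpc.py:

  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
  ./scripts/rpc.py nvmf_discovery_get_referrals | jq length      # expect 3
  # on the host side, referrals appear as extra discovery log entries
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json
  ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430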
00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:27.695 21:52:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.230 21:52:49 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:30.230 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:30.230 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.230 21:52:49 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:30.230 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:30.230 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:30.230 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:30.231 21:52:49 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:30.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:30.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:09:30.231 00:09:30.231 --- 10.0.0.2 ping statistics --- 00:09:30.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.231 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:30.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:30.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:09:30.231 00:09:30.231 --- 10.0.0.1 ping statistics --- 00:09:30.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.231 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3973254 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3973254 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 3973254 ']' 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
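At this point the physical loopback is up: cvl_0_0 has been moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and both directions answer a single ping before any NVMe/TCP traffic is attempted; nvmf_tgt is then launched inside the namespace and the harness waits on its RPC socket. The topology condensed from the trace above, with interface and namespace names as in this run and the readiness poll a simplified stand-in for waitforlisten:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target side
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done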
00:09:30.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:30.231 21:52:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:30.231 [2024-07-13 21:52:49.334287] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:30.231 [2024-07-13 21:52:49.334438] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.231 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.231 [2024-07-13 21:52:49.475118] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.491 [2024-07-13 21:52:49.741427] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.491 [2024-07-13 21:52:49.741515] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.492 [2024-07-13 21:52:49.741544] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.492 [2024-07-13 21:52:49.741565] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.492 [2024-07-13 21:52:49.741588] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:30.492 [2024-07-13 21:52:49.741723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.492 [2024-07-13 21:52:49.741784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.492 [2024-07-13 21:52:49.741814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.492 [2024-07-13 21:52:49.741829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.059 [2024-07-13 21:52:50.316416] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.059 [2024-07-13 21:52:50.329330] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:31.059 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:31.317 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:31.317 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:31.317 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:31.317 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.317 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.317 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.317 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:31.317 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.318 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.318 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.318 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:31.318 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.318 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.318 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.318 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:31.318 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:31.318 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.318 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.318 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.318 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:31.318 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:31.318 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:31.318 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:31.318 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:31.318 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:31.318 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:31.577 21:52:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:31.836 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:31.836 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:31.836 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:31.836 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:31.836 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:31.836 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:31.836 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:31.836 21:52:51 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:31.836 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:31.836 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:31.836 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:31.836 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:31.836 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:31.836 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:31.836 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:31.836 21:52:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.836 21:52:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.836 21:52:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.836 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:31.836 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:31.836 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:31.836 21:52:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.836 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:31.836 21:52:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:31.836 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:32.094 21:52:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.094 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:32.094 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:32.094 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:32.094 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:32.094 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:32.094 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:32.095 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:32.095 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:32.095 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:32.095 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:32.095 21:52:51 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:32.095 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:32.095 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:32.095 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:32.095 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:32.095 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:32.095 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:32.095 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:32.095 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:32.095 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:32.095 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:32.353 
21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:32.353 21:52:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:32.353 rmmod nvme_tcp 00:09:32.353 rmmod nvme_fabrics 00:09:32.613 rmmod nvme_keyring 00:09:32.613 21:52:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:32.613 21:52:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:32.613 21:52:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:32.613 21:52:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3973254 ']' 00:09:32.613 21:52:51 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3973254 00:09:32.613 21:52:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 3973254 ']' 00:09:32.613 21:52:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 3973254 00:09:32.613 21:52:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:09:32.613 21:52:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:32.613 21:52:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3973254 00:09:32.613 21:52:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:32.613 21:52:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:32.613 21:52:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3973254' 00:09:32.613 killing process with pid 3973254 00:09:32.613 21:52:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 3973254 00:09:32.613 21:52:51 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 3973254 00:09:33.993 21:52:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:33.993 21:52:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:33.993 21:52:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:33.993 21:52:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:33.993 21:52:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:33.993 21:52:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.993 21:52:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.993 21:52:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.960 21:52:55 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:35.960 00:09:35.960 real 0m8.096s 00:09:35.960 user 0m13.298s 00:09:35.960 sys 0m2.319s 00:09:35.960 21:52:55 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:35.960 21:52:55 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:35.960 ************************************ 00:09:35.960 END TEST nvmf_referrals 00:09:35.960 ************************************ 00:09:35.960 21:52:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:35.960 21:52:55 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:35.960 21:52:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:35.960 21:52:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:35.961 21:52:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:35.961 ************************************ 00:09:35.961 START TEST nvmf_connect_disconnect 00:09:35.961 ************************************ 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:35.961 * Looking for test storage... 00:09:35.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:35.961 21:52:55 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:35.961 21:52:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:37.867 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:37.867 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:37.867 21:52:57 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:37.867 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:37.867 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:37.867 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:38.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:38.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:09:38.127 00:09:38.127 --- 10.0.0.2 ping statistics --- 00:09:38.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.127 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:38.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:38.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:09:38.127 00:09:38.127 --- 10.0.0.1 ping statistics --- 00:09:38.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.127 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3975683 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3975683 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 3975683 ']' 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:38.127 21:52:57 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:38.127 [2024-07-13 21:52:57.423499] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
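The namespace bring-up traced just above condenses to a short sequence. This is a sketch of what nvmf_tcp_init does on this rig, with the interface names (cvl_0_0, cvl_0_1), addresses, and port taken from the log, not a copy of the common.sh source:

ip -4 addr flush cvl_0_0                                         # start both ports from a clean state
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                     # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open the NVMe/TCP port
ping -c 1 10.0.0.2                                               # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target ns -> root ns

Both pings answering with zero loss, as in the statistics above, is what lets nvmftestinit return 0 and the suite proceed.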
00:09:38.127 [2024-07-13 21:52:57.423663] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.127 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.387 [2024-07-13 21:52:57.585459] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.647 [2024-07-13 21:52:57.854190] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.647 [2024-07-13 21:52:57.854271] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.647 [2024-07-13 21:52:57.854300] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.647 [2024-07-13 21:52:57.854320] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.647 [2024-07-13 21:52:57.854341] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.647 [2024-07-13 21:52:57.854485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.647 [2024-07-13 21:52:57.854545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.647 [2024-07-13 21:52:57.854576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.647 [2024-07-13 21:52:57.854591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:39.216 [2024-07-13 21:52:58.398513] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:39.216 21:52:58 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:39.216 [2024-07-13 21:52:58.511776] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:39.216 21:52:58 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:41.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.675 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.131 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.579 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.634 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.063 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.439 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:12:29.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.337 21:56:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:33.337 21:56:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:33.337 21:56:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:33.337 21:56:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:13:33.337 21:56:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:33.337 21:56:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:13:33.337 21:56:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:33.337 21:56:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:33.337 rmmod nvme_tcp 00:13:33.337 rmmod nvme_fabrics 00:13:33.337 rmmod nvme_keyring 00:13:33.337 21:56:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:33.337 21:56:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:13:33.337 21:56:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:13:33.337 21:56:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3975683 ']' 00:13:33.337 21:56:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3975683 00:13:33.337 21:56:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 
3975683 ']' 00:13:33.337 21:56:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 3975683 00:13:33.337 21:56:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:13:33.337 21:56:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:33.337 21:56:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3975683 00:13:33.337 21:56:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:33.337 21:56:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:33.337 21:56:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3975683' 00:13:33.337 killing process with pid 3975683 00:13:33.337 21:56:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 3975683 00:13:33.337 21:56:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 3975683 00:13:34.712 21:56:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:34.712 21:56:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:34.712 21:56:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:34.712 21:56:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:34.712 21:56:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:34.712 21:56:53 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.712 21:56:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.712 21:56:53 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.613 21:56:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:36.613 00:13:36.613 real 4m0.689s 00:13:36.613 user 15m9.432s 00:13:36.613 sys 0m37.862s 00:13:36.613 21:56:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:36.613 21:56:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:36.613 ************************************ 00:13:36.613 END TEST nvmf_connect_disconnect 00:13:36.613 ************************************ 00:13:36.613 21:56:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:36.613 21:56:55 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:36.613 21:56:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:36.613 21:56:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:36.613 21:56:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:36.613 ************************************ 00:13:36.613 START TEST nvmf_multitarget 00:13:36.613 ************************************ 00:13:36.613 21:56:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:36.613 * Looking for test storage... 
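Before the multitarget output resumes: the roughly four minutes of "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above are 100 connect/disconnect cycles against a single malloc-backed subsystem. The provisioning is echoed by rpc_cmd earlier in the trace; the cycle itself runs after "set +x" (connect_disconnect.sh@34), so the loop below is a plausible reconstruction from the knobs the log does show (num_iterations=100, NVME_CONNECT='nvme connect -i 8'), not the script verbatim:

# provisioning, as traced via rpc_cmd above
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0   # TCP transport, 8192-byte io-unit-size, no in-capsule data
rpc.py bdev_malloc_create 64 512                      # 64 MiB malloc bdev, 512-byte blocks -> "Malloc0"
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# the untraced cycle; the real script also verifies the controller appeared
for i in $(seq 1 100); do
    nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # 8 I/O queues
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # prints "disconnected 1 controller(s)"
done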
00:13:36.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:13:36.614 21:56:55 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:39.143 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:39.143 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:39.143 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:39.144 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
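The xtrace here (continuing below) is the NIC discovery pass: vendor/device IDs seed the e810/x722/mlx arrays, both E810 functions on this rig match 0x8086:0x159b (driver ice), and the backing netdevs are then read straight out of sysfs. A condensed equivalent of that walk, with the two PCI addresses taken from the "Found ..." lines in the log:

for pci in 0000:0a:00.0 0000:0a:00.1; do
    for net in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$net" ] || continue                         # skip functions with no netdev
        echo "Found net devices under $pci: ${net##*/}"   # cvl_0_0, then cvl_0_1 here
    done
done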
00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:39.144 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:39.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:39.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:13:39.144 00:13:39.144 --- 10.0.0.2 ping statistics --- 00:13:39.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.144 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:39.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:39.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:13:39.144 00:13:39.144 --- 10.0.0.1 ping statistics --- 00:13:39.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.144 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=4007226 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 4007226 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 4007226 ']' 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:39.144 21:56:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:39.144 [2024-07-13 21:56:58.286518] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
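nvmfappstart, per the trace above, launches the target inside the namespace and then blocks until its RPC socket answers. A sketch of that sequence, under the assumption that the liveness probe is an innocuous RPC such as rpc_get_methods (the harness's waitforlisten differs in detail):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &                        # shm id 0, all tracepoint groups, core mask 0xF (4 reactors)
nvmfpid=$!
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1       # give up if the target died during startup
    sleep 0.1
done

The core mask explains the four "Reactor started on core N" notices that follow, and -i 0 is the shm id the banner's "spdk_trace -s nvmf -i 0" hint refers to.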
00:13:39.144 [2024-07-13 21:56:58.286659] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.144 EAL: No free 2048 kB hugepages reported on node 1 00:13:39.144 [2024-07-13 21:56:58.430544] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:39.402 [2024-07-13 21:56:58.700347] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.402 [2024-07-13 21:56:58.700436] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:39.402 [2024-07-13 21:56:58.700465] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.402 [2024-07-13 21:56:58.700486] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.402 [2024-07-13 21:56:58.700510] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:39.402 [2024-07-13 21:56:58.700643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.402 [2024-07-13 21:56:58.700699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.402 [2024-07-13 21:56:58.700731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.402 [2024-07-13 21:56:58.700744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:39.967 21:56:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:39.967 21:56:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:13:39.967 21:56:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:39.967 21:56:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:39.967 21:56:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:39.967 21:56:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.967 21:56:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:39.967 21:56:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:39.967 21:56:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:39.967 21:56:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:39.967 21:56:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:40.224 "nvmf_tgt_1" 00:13:40.224 21:56:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:40.224 "nvmf_tgt_2" 00:13:40.224 21:56:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:40.224 21:56:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:40.482 21:56:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:13:40.482 21:56:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:40.482 true 00:13:40.482 21:56:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:40.739 true 00:13:40.740 21:56:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:40.740 21:56:59 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:40.740 21:57:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:40.740 21:57:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:40.740 21:57:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:40.740 21:57:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:40.740 21:57:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:13:40.740 21:57:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:40.740 21:57:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:13:40.740 21:57:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:40.740 21:57:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:40.740 rmmod nvme_tcp 00:13:40.740 rmmod nvme_fabrics 00:13:40.740 rmmod nvme_keyring 00:13:40.740 21:57:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:40.740 21:57:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:13:40.740 21:57:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:13:40.740 21:57:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 4007226 ']' 00:13:40.740 21:57:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 4007226 00:13:40.740 21:57:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 4007226 ']' 00:13:40.740 21:57:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 4007226 00:13:40.740 21:57:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:13:40.740 21:57:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:40.740 21:57:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4007226 00:13:40.740 21:57:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:40.740 21:57:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:40.740 21:57:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4007226' 00:13:40.740 killing process with pid 4007226 00:13:40.740 21:57:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 4007226 00:13:40.740 21:57:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 4007226 00:13:42.113 21:57:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:42.113 21:57:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:42.113 21:57:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:42.113 21:57:01 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:42.113 21:57:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:42.114 21:57:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.114 21:57:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:42.114 21:57:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.036 21:57:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:44.311 00:13:44.311 real 0m7.540s 00:13:44.311 user 0m11.339s 00:13:44.311 sys 0m2.160s 00:13:44.311 21:57:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:44.311 21:57:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:44.311 ************************************ 00:13:44.311 END TEST nvmf_multitarget 00:13:44.311 ************************************ 00:13:44.311 21:57:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:44.311 21:57:03 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:44.311 21:57:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:44.311 21:57:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:44.311 21:57:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:44.311 ************************************ 00:13:44.311 START TEST nvmf_rpc 00:13:44.311 ************************************ 00:13:44.311 21:57:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:44.311 * Looking for test storage... 
00:13:44.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:44.311 21:57:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:44.311 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:44.311 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.311 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.311 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.311 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.311 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.311 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.311 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.311 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.311 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.311 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.311 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:44.311 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:44.311 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.311 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.311 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:44.311 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:13:44.312 21:57:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
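The trace around this point is nvmf/common.sh walking the PCI bus for supported NICs: it seeds allowlists of Intel E810 (0x1592, 0x159b), X722 and Mellanox device IDs, then keeps only devices that expose a kernel net interface under /sys/bus/pci/devices/$pci/net/. A condensed sketch of that probe, assuming plain sysfs reads (the loop structure below is a reconstruction, not the script's exact code; the device IDs and the net/ glob are taken from the trace):

for pci in /sys/bus/pci/devices/*; do
  ven=$(<"$pci/vendor") dev=$(<"$pci/device")
  # keep only Intel E810 parts, matching the 0x8086 - 0x159b hits in the log
  [[ $ven == 0x8086 && ( $dev == 0x1592 || $dev == 0x159b ) ]] || continue
  for net in "$pci"/net/*; do
    # a device only counts if the driver bound a net interface to it
    [[ -e $net ]] && echo "Found ${pci##*/} ($ven - $dev): ${net##*/}"
  done
done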
00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:46.213 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:46.214 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:46.214 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:46.214 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:46.214 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:46.214 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:46.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:46.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:13:46.214 00:13:46.214 --- 10.0.0.2 ping statistics --- 00:13:46.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.214 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:13:46.473 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:46.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:46.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:13:46.473 00:13:46.473 --- 10.0.0.1 ping statistics --- 00:13:46.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.473 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:13:46.473 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:46.473 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:13:46.473 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:46.473 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:46.473 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:46.473 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:46.473 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:46.473 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:46.473 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:46.473 21:57:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:46.473 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:46.473 21:57:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:46.473 21:57:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.473 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=4009574 00:13:46.473 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 4009574 00:13:46.473 21:57:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:46.473 21:57:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 4009574 ']' 00:13:46.473 21:57:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.473 21:57:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:46.473 21:57:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.473 21:57:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:46.473 21:57:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.473 [2024-07-13 21:57:05.729527] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:46.473 [2024-07-13 21:57:05.729674] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.473 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.731 [2024-07-13 21:57:05.872327] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:46.990 [2024-07-13 21:57:06.142306] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.990 [2024-07-13 21:57:06.142385] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:46.990 [2024-07-13 21:57:06.142414] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:46.990 [2024-07-13 21:57:06.142434] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:46.990 [2024-07-13 21:57:06.142455] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:46.990 [2024-07-13 21:57:06.142580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.990 [2024-07-13 21:57:06.142638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.990 [2024-07-13 21:57:06.142669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.990 [2024-07-13 21:57:06.142683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:47.249 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.249 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:13:47.249 21:57:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:47.249 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:47.249 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.507 21:57:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.507 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:47.507 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.507 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.507 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.507 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:47.507 "tick_rate": 2700000000, 00:13:47.507 "poll_groups": [ 00:13:47.507 { 00:13:47.507 "name": "nvmf_tgt_poll_group_000", 00:13:47.507 "admin_qpairs": 0, 00:13:47.507 "io_qpairs": 0, 00:13:47.507 "current_admin_qpairs": 0, 00:13:47.507 "current_io_qpairs": 0, 00:13:47.507 "pending_bdev_io": 0, 00:13:47.507 "completed_nvme_io": 0, 00:13:47.507 "transports": [] 00:13:47.507 }, 00:13:47.507 { 00:13:47.507 "name": "nvmf_tgt_poll_group_001", 00:13:47.507 "admin_qpairs": 0, 00:13:47.507 "io_qpairs": 0, 00:13:47.507 "current_admin_qpairs": 0, 00:13:47.507 "current_io_qpairs": 0, 00:13:47.507 "pending_bdev_io": 0, 00:13:47.507 "completed_nvme_io": 0, 00:13:47.507 "transports": [] 00:13:47.507 }, 00:13:47.507 { 00:13:47.507 "name": "nvmf_tgt_poll_group_002", 00:13:47.508 "admin_qpairs": 0, 00:13:47.508 "io_qpairs": 0, 00:13:47.508 "current_admin_qpairs": 0, 00:13:47.508 "current_io_qpairs": 0, 00:13:47.508 "pending_bdev_io": 0, 00:13:47.508 "completed_nvme_io": 0, 00:13:47.508 "transports": [] 00:13:47.508 }, 00:13:47.508 { 00:13:47.508 "name": "nvmf_tgt_poll_group_003", 00:13:47.508 "admin_qpairs": 0, 00:13:47.508 "io_qpairs": 0, 00:13:47.508 "current_admin_qpairs": 0, 00:13:47.508 "current_io_qpairs": 0, 00:13:47.508 "pending_bdev_io": 0, 00:13:47.508 "completed_nvme_io": 0, 00:13:47.508 "transports": [] 00:13:47.508 } 00:13:47.508 ] 00:13:47.508 }' 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.508 [2024-07-13 21:57:06.752473] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:47.508 "tick_rate": 2700000000, 00:13:47.508 "poll_groups": [ 00:13:47.508 { 00:13:47.508 "name": "nvmf_tgt_poll_group_000", 00:13:47.508 "admin_qpairs": 0, 00:13:47.508 "io_qpairs": 0, 00:13:47.508 "current_admin_qpairs": 0, 00:13:47.508 "current_io_qpairs": 0, 00:13:47.508 "pending_bdev_io": 0, 00:13:47.508 "completed_nvme_io": 0, 00:13:47.508 "transports": [ 00:13:47.508 { 00:13:47.508 "trtype": "TCP" 00:13:47.508 } 00:13:47.508 ] 00:13:47.508 }, 00:13:47.508 { 00:13:47.508 "name": "nvmf_tgt_poll_group_001", 00:13:47.508 "admin_qpairs": 0, 00:13:47.508 "io_qpairs": 0, 00:13:47.508 "current_admin_qpairs": 0, 00:13:47.508 "current_io_qpairs": 0, 00:13:47.508 "pending_bdev_io": 0, 00:13:47.508 "completed_nvme_io": 0, 00:13:47.508 "transports": [ 00:13:47.508 { 00:13:47.508 "trtype": "TCP" 00:13:47.508 } 00:13:47.508 ] 00:13:47.508 }, 00:13:47.508 { 00:13:47.508 "name": "nvmf_tgt_poll_group_002", 00:13:47.508 "admin_qpairs": 0, 00:13:47.508 "io_qpairs": 0, 00:13:47.508 "current_admin_qpairs": 0, 00:13:47.508 "current_io_qpairs": 0, 00:13:47.508 "pending_bdev_io": 0, 00:13:47.508 "completed_nvme_io": 0, 00:13:47.508 "transports": [ 00:13:47.508 { 00:13:47.508 "trtype": "TCP" 00:13:47.508 } 00:13:47.508 ] 00:13:47.508 }, 00:13:47.508 { 00:13:47.508 "name": "nvmf_tgt_poll_group_003", 00:13:47.508 "admin_qpairs": 0, 00:13:47.508 "io_qpairs": 0, 00:13:47.508 "current_admin_qpairs": 0, 00:13:47.508 "current_io_qpairs": 0, 00:13:47.508 "pending_bdev_io": 0, 00:13:47.508 "completed_nvme_io": 0, 00:13:47.508 "transports": [ 00:13:47.508 { 00:13:47.508 "trtype": "TCP" 00:13:47.508 } 00:13:47.508 ] 00:13:47.508 } 00:13:47.508 ] 00:13:47.508 }' 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
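The jcount/jsum invocations in this stretch are small jq wrappers from target/rpc.sh: jcount counts the values a filter selects from the nvmf_get_stats JSON (here 4 poll-group names) and jsum adds them up (here 0 admin/io qpairs, since no initiator has connected yet). A sketch of equivalent helpers, assuming the stats JSON arrives on stdin (the real functions may instead read a captured $stats variable; the jq, wc -l and awk pipeline pieces are exactly those visible in the trace):

jcount() {
  local filter=$1
  # count how many JSON values the filter yields, one per line
  jq "$filter" | wc -l
}

jsum() {
  local filter=$1
  # sum the numeric values the filter yields
  jq "$filter" | awk '{s+=$1} END {print s}'
}

# e.g. piping the nvmf_get_stats output shown above:
#   jcount '.poll_groups[].name'       -> 4
#   jsum   '.poll_groups[].io_qpairs'  -> 0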
00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.508 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.767 Malloc1 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.767 [2024-07-13 21:57:06.959660] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:13:47.767 21:57:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:47.767 [2024-07-13 21:57:06.982927] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:13:47.767 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:47.767 could not add new controller: failed to write to nvme-fabrics device 00:13:47.767 21:57:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:13:47.767 21:57:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:47.767 21:57:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:47.767 21:57:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:47.767 21:57:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:47.767 21:57:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.767 21:57:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.767 21:57:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.767 21:57:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:48.333 21:57:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:48.333 21:57:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:48.333 21:57:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:48.333 21:57:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:48.333 21:57:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:50.859 21:57:09 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:50.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:50.859 [2024-07-13 21:57:09.876122] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:13:50.859 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:50.859 could not add new controller: failed to write to nvme-fabrics device 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.859 21:57:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:51.426 21:57:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:51.426 21:57:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:51.426 21:57:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:51.426 21:57:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:51.426 21:57:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:53.329 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:53.329 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:53.329 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:53.329 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:53.329 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:53.329 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:53.329 21:57:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:53.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:53.590 21:57:12 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.590 [2024-07-13 21:57:12.827420] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.590 21:57:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:54.158 21:57:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:54.158 21:57:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:54.158 21:57:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:54.158 21:57:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:54.158 21:57:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:56.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.684 [2024-07-13 21:57:15.661204] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.684 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.685 21:57:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:56.685 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.685 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.685 21:57:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.685 21:57:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:57.252 21:57:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:57.252 21:57:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:13:57.252 21:57:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:57.252 21:57:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:57.252 21:57:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:59.153 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:59.153 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:59.153 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:59.153 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:59.153 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:59.153 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:59.153 21:57:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:59.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.153 21:57:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:59.153 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:59.153 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:59.153 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.413 [2024-07-13 21:57:18.588255] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.413 21:57:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:59.983 21:57:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:59.983 21:57:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:59.983 21:57:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:59.983 21:57:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:59.983 21:57:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:02.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.553 [2024-07-13 21:57:21.527432] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.553 21:57:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:02.811 21:57:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:02.811 21:57:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:02.811 21:57:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:02.811 21:57:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:02.811 21:57:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:05.335 
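Each of the passes traced above runs the same create/connect/verify/teardown cycle from target/rpc.sh. A minimal sketch of one iteration, reconstructed from the xtrace lines (rpc_cmd, waitforserial and waitforserial_disconnect are test-harness helpers; their internals are assumed from the traced lsblk/grep polling, and exact option handling may differ):

    # one pass of the rpc.sh@81 loop, as traced above
    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME            # polls lsblk -l -o NAME,SERIAL until the namespace appears
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME # waits until the serial is gone from lsblk again
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done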
21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:05.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.335 [2024-07-13 21:57:24.419661] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.335 21:57:24 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.335 21:57:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:05.900 21:57:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:05.900 21:57:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:05.900 21:57:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:05.900 21:57:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:05.900 21:57:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:07.796 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:07.796 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:07.796 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:07.796 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:07.796 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:07.796 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:07.796 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:08.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.054 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:08.054 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:08.054 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:08.054 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:08.054 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:08.054 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:08.054 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:08.054 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:08.054 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.054 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.054 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.054 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:08.054 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.054 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.054 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.054 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:08.054 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:08.054 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:08.054 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.054 21:57:27 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.055 [2024-07-13 21:57:27.350768] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.055 [2024-07-13 21:57:27.398822] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.055 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.055 [2024-07-13 21:57:27.447043] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.313 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.313 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:08.313 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.313 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.313 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.313 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:08.313 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.313 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.313 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.313 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.313 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.313 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:14:08.313 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.313 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:08.313 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.313 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.313 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.313 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:08.313 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:08.313 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.314 [2024-07-13 21:57:27.495250] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.314 [2024-07-13 21:57:27.543388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:08.314 "tick_rate": 2700000000, 00:14:08.314 "poll_groups": [ 00:14:08.314 { 00:14:08.314 "name": "nvmf_tgt_poll_group_000", 00:14:08.314 "admin_qpairs": 2, 00:14:08.314 "io_qpairs": 84, 00:14:08.314 "current_admin_qpairs": 0, 00:14:08.314 "current_io_qpairs": 0, 00:14:08.314 "pending_bdev_io": 0, 00:14:08.314 "completed_nvme_io": 184, 00:14:08.314 "transports": [ 00:14:08.314 { 00:14:08.314 "trtype": "TCP" 00:14:08.314 } 00:14:08.314 ] 00:14:08.314 }, 00:14:08.314 { 00:14:08.314 "name": "nvmf_tgt_poll_group_001", 00:14:08.314 "admin_qpairs": 2, 00:14:08.314 "io_qpairs": 84, 00:14:08.314 "current_admin_qpairs": 0, 00:14:08.314 "current_io_qpairs": 0, 00:14:08.314 "pending_bdev_io": 0, 00:14:08.314 "completed_nvme_io": 168, 00:14:08.314 "transports": [ 00:14:08.314 { 00:14:08.314 "trtype": "TCP" 00:14:08.314 } 00:14:08.314 ] 00:14:08.314 }, 00:14:08.314 { 00:14:08.314 
"name": "nvmf_tgt_poll_group_002", 00:14:08.314 "admin_qpairs": 1, 00:14:08.314 "io_qpairs": 84, 00:14:08.314 "current_admin_qpairs": 0, 00:14:08.314 "current_io_qpairs": 0, 00:14:08.314 "pending_bdev_io": 0, 00:14:08.314 "completed_nvme_io": 183, 00:14:08.314 "transports": [ 00:14:08.314 { 00:14:08.314 "trtype": "TCP" 00:14:08.314 } 00:14:08.314 ] 00:14:08.314 }, 00:14:08.314 { 00:14:08.314 "name": "nvmf_tgt_poll_group_003", 00:14:08.314 "admin_qpairs": 2, 00:14:08.314 "io_qpairs": 84, 00:14:08.314 "current_admin_qpairs": 0, 00:14:08.314 "current_io_qpairs": 0, 00:14:08.314 "pending_bdev_io": 0, 00:14:08.314 "completed_nvme_io": 151, 00:14:08.314 "transports": [ 00:14:08.314 { 00:14:08.314 "trtype": "TCP" 00:14:08.314 } 00:14:08.314 ] 00:14:08.314 } 00:14:08.314 ] 00:14:08.314 }' 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:08.314 21:57:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:08.314 rmmod nvme_tcp 00:14:08.314 rmmod nvme_fabrics 00:14:08.573 rmmod nvme_keyring 00:14:08.573 21:57:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:08.573 21:57:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:14:08.573 21:57:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:14:08.573 21:57:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 4009574 ']' 00:14:08.573 21:57:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 4009574 00:14:08.573 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 4009574 ']' 00:14:08.573 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 4009574 00:14:08.573 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:14:08.573 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:08.573 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4009574 00:14:08.573 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:14:08.573 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:08.573 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4009574' 00:14:08.573 killing process with pid 4009574 00:14:08.573 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 4009574 00:14:08.573 21:57:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 4009574 00:14:09.949 21:57:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:09.949 21:57:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:09.949 21:57:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:09.949 21:57:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:09.949 21:57:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:09.949 21:57:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.949 21:57:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.949 21:57:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.858 21:57:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:11.858 00:14:11.858 real 0m27.777s 00:14:11.858 user 1m29.187s 00:14:11.858 sys 0m4.433s 00:14:11.858 21:57:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:11.858 21:57:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.858 ************************************ 00:14:11.858 END TEST nvmf_rpc 00:14:11.858 ************************************ 00:14:12.115 21:57:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:12.115 21:57:31 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:12.115 21:57:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:12.115 21:57:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:12.115 21:57:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:12.115 ************************************ 00:14:12.115 START TEST nvmf_invalid 00:14:12.115 ************************************ 00:14:12.115 21:57:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:12.115 * Looking for test storage... 
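Before nvmf_rpc finished above, the qpair totals (7 admin qpairs and 336 I/O qpairs across the four poll groups) were computed by the jsum helper traced at rpc.sh@19-20. A plausible reconstruction, assuming $stats holds the nvmf_get_stats JSON captured at rpc.sh@110:

    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'  # one value per poll group, summed
    }
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))  # 2+2+1+2 = 7 in the stats above
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))     # 84*4 = 336 in the stats above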
00:14:12.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:12.115 21:57:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:12.115 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:12.115 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.115 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:14:12.116 21:57:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:14.646 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:14.647 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:14.647 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:14.647 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:14.647 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:14.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:14.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:14:14.647 00:14:14.647 --- 10.0.0.2 ping statistics --- 00:14:14.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.647 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:14.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:14.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:14:14.647 00:14:14.647 --- 10.0.0.1 ping statistics --- 00:14:14.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.647 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=4014955 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 4014955 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 4014955 ']' 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:14.647 21:57:33 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:14.647 [2024-07-13 21:57:33.671075] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:14.647 [2024-07-13 21:57:33.671216] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.647 EAL: No free 2048 kB hugepages reported on node 1 00:14:14.647 [2024-07-13 21:57:33.807550] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:14.906 [2024-07-13 21:57:34.073330] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.906 [2024-07-13 21:57:34.073406] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.906 [2024-07-13 21:57:34.073435] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.906 [2024-07-13 21:57:34.073456] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.906 [2024-07-13 21:57:34.073476] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:14.906 [2024-07-13 21:57:34.073599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.906 [2024-07-13 21:57:34.073656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.906 [2024-07-13 21:57:34.073685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.906 [2024-07-13 21:57:34.073699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:15.487 21:57:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:15.488 21:57:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:14:15.488 21:57:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:15.488 21:57:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:15.488 21:57:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:15.488 21:57:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.488 21:57:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:15.488 21:57:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3520 00:14:15.488 [2024-07-13 21:57:34.862702] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:15.745 21:57:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:15.745 { 00:14:15.745 "nqn": "nqn.2016-06.io.spdk:cnode3520", 00:14:15.745 "tgt_name": "foobar", 00:14:15.745 "method": "nvmf_create_subsystem", 00:14:15.745 "req_id": 1 00:14:15.745 } 00:14:15.745 Got JSON-RPC error response 00:14:15.745 response: 00:14:15.745 { 00:14:15.745 "code": -32603, 00:14:15.745 "message": "Unable to find target foobar" 00:14:15.745 }' 00:14:15.745 21:57:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:15.745 { 00:14:15.745 "nqn": "nqn.2016-06.io.spdk:cnode3520", 00:14:15.745 "tgt_name": "foobar", 00:14:15.745 "method": "nvmf_create_subsystem", 00:14:15.745 "req_id": 1 00:14:15.745 } 00:14:15.745 Got JSON-RPC error response 00:14:15.745 response: 00:14:15.745 { 00:14:15.745 "code": -32603, 00:14:15.745 "message": "Unable to find target foobar" 00:14:15.745 } 
== *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:15.745 21:57:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:15.745 21:57:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode16607 00:14:15.745 [2024-07-13 21:57:35.111590] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16607: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:15.745 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:15.745 { 00:14:15.745 "nqn": "nqn.2016-06.io.spdk:cnode16607", 00:14:15.745 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:15.745 "method": "nvmf_create_subsystem", 00:14:15.745 "req_id": 1 00:14:15.745 } 00:14:15.745 Got JSON-RPC error response 00:14:15.745 response: 00:14:15.745 { 00:14:15.745 "code": -32602, 00:14:15.745 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:15.745 }' 00:14:15.745 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:15.745 { 00:14:15.745 "nqn": "nqn.2016-06.io.spdk:cnode16607", 00:14:15.745 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:15.745 "method": "nvmf_create_subsystem", 00:14:15.745 "req_id": 1 00:14:15.745 } 00:14:15.745 Got JSON-RPC error response 00:14:15.745 response: 00:14:15.745 { 00:14:15.745 "code": -32602, 00:14:15.745 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:15.745 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:15.745 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:15.745 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode6052 00:14:16.004 [2024-07-13 21:57:35.356391] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6052: invalid model number 'SPDK_Controller' 00:14:16.004 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:16.004 { 00:14:16.004 "nqn": "nqn.2016-06.io.spdk:cnode6052", 00:14:16.004 "model_number": "SPDK_Controller\u001f", 00:14:16.004 "method": "nvmf_create_subsystem", 00:14:16.004 "req_id": 1 00:14:16.004 } 00:14:16.004 Got JSON-RPC error response 00:14:16.004 response: 00:14:16.004 { 00:14:16.004 "code": -32602, 00:14:16.004 "message": "Invalid MN SPDK_Controller\u001f" 00:14:16.004 }' 00:14:16.004 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:16.004 { 00:14:16.004 "nqn": "nqn.2016-06.io.spdk:cnode6052", 00:14:16.004 "model_number": "SPDK_Controller\u001f", 00:14:16.004 "method": "nvmf_create_subsystem", 00:14:16.004 "req_id": 1 00:14:16.004 } 00:14:16.004 Got JSON-RPC error response 00:14:16.004 response: 00:14:16.004 { 00:14:16.004 "code": -32602, 00:14:16.004 "message": "Invalid MN SPDK_Controller\u001f" 00:14:16.004 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:16.004 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' 
'87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.005 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.263 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:16.263 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:16.263 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:14:16.263 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.263 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.263 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:16.263 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:16.263 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:16.263 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.263 21:57:35 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.263 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:16.263 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:16.263 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.264 21:57:35 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ @ == \- ]] 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '@avhG(9FvhBucj&9dz_g,' 00:14:16.264 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '@avhG(9FvhBucj&9dz_g,' nqn.2016-06.io.spdk:cnode24007 00:14:16.527 [2024-07-13 21:57:35.669458] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24007: invalid serial number '@avhG(9FvhBucj&9dz_g,' 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:16.527 { 00:14:16.527 "nqn": "nqn.2016-06.io.spdk:cnode24007", 00:14:16.527 "serial_number": "@avhG(9FvhBucj&9dz_g,", 00:14:16.527 "method": "nvmf_create_subsystem", 00:14:16.527 "req_id": 1 00:14:16.527 } 00:14:16.527 Got JSON-RPC error response 00:14:16.527 response: 00:14:16.527 { 00:14:16.527 
"code": -32602, 00:14:16.527 "message": "Invalid SN @avhG(9FvhBucj&9dz_g," 00:14:16.527 }' 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:16.527 { 00:14:16.527 "nqn": "nqn.2016-06.io.spdk:cnode24007", 00:14:16.527 "serial_number": "@avhG(9FvhBucj&9dz_g,", 00:14:16.527 "method": "nvmf_create_subsystem", 00:14:16.527 "req_id": 1 00:14:16.527 } 00:14:16.527 Got JSON-RPC error response 00:14:16.527 response: 00:14:16.527 { 00:14:16.527 "code": -32602, 00:14:16.527 "message": "Invalid SN @avhG(9FvhBucj&9dz_g," 00:14:16.527 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:14:16.527 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 
00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 
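The long run of printf/echo/string+= records surrounding this point is gen_random_s at work: for every position it selects one ASCII code from a chars table spanning 32..127, converts it to hex with printf %x, materialises the byte with echo -e '\xNN', and appends it, then checks that the result does not begin with '-' (the [[ @ == \- ]] and [[ C == \- ]] records) so the finished string cannot be mistaken for an option flag. A compact sketch of that generator, assuming bash's RANDOM is an adequate stand-in for whatever entropy the real helper uses, since the selection step itself is not visible in the trace; the retry on a leading '-' is my simplification of a guard whose remedy the trace does not show:

# Sketch of gen_random_s: an N-character string drawn from ASCII 32..127.
gen_random_s() {
    local length=$1 ll code string=
    while :; do
        string=
        for (( ll = 0; ll < length; ll++ )); do
            code=$(( 32 + RANDOM % 96 ))                 # 32..127 inclusive
            # printf %x + echo -e is the round-trip visible in the trace.
            string+=$(echo -e "\\x$(printf '%x' "$code")")
        done
        [[ $string == -* ]] || break    # same leading-dash guard as above
    done
    echo "$string"
}

gen_random_s 21    # a serial-number candidate, as in the first loop above
gen_random_s 41    # a model-number candidate, as in the loop running here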
00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ C == \- ]] 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'C`B?"IjG1bWV8y4\MZ[E~G:WvrVMu?iB>mxu*w}o=' 00:14:16.528 21:57:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'C`B?"IjG1bWV8y4\MZ[E~G:WvrVMu?iB>mxu*w}o=' nqn.2016-06.io.spdk:cnode22661 00:14:16.788 [2024-07-13 21:57:36.062783] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22661: invalid model number 'C`B?"IjG1bWV8y4\MZ[E~G:WvrVMu?iB>mxu*w}o=' 00:14:16.788 21:57:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:16.788 { 00:14:16.788 "nqn": "nqn.2016-06.io.spdk:cnode22661", 00:14:16.788 "model_number": "C`B?\"IjG1bWV8y4\\MZ[E~G:WvrVMu?iB>mxu*w}o=", 00:14:16.788 "method": "nvmf_create_subsystem", 00:14:16.788 "req_id": 1 00:14:16.788 } 00:14:16.788 Got JSON-RPC error response 00:14:16.788 response: 00:14:16.788 { 00:14:16.788 "code": -32602, 00:14:16.788 "message": "Invalid MN C`B?\"IjG1bWV8y4\\MZ[E~G:WvrVMu?iB>mxu*w}o=" 00:14:16.788 }' 00:14:16.788 21:57:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:16.788 { 00:14:16.788 "nqn": "nqn.2016-06.io.spdk:cnode22661", 00:14:16.788 "model_number": "C`B?\"IjG1bWV8y4\\MZ[E~G:WvrVMu?iB>mxu*w}o=", 00:14:16.788 "method": "nvmf_create_subsystem", 00:14:16.788 "req_id": 1 00:14:16.788 } 00:14:16.788 Got JSON-RPC error response 00:14:16.788 response: 
00:14:16.788 { 00:14:16.788 "code": -32602, 00:14:16.788 "message": "Invalid MN C`B?\"IjG1bWV8y4\\MZ[E~G:WvrVMu?iB>mxu*w}o=" 00:14:16.788 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:16.788 21:57:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:17.046 [2024-07-13 21:57:36.303734] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.046 21:57:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:17.303 21:57:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:17.303 21:57:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:17.303 21:57:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:17.303 21:57:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:17.303 21:57:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:17.561 [2024-07-13 21:57:36.810569] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:17.561 21:57:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:17.561 { 00:14:17.561 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:17.561 "listen_address": { 00:14:17.561 "trtype": "tcp", 00:14:17.561 "traddr": "", 00:14:17.561 "trsvcid": "4421" 00:14:17.561 }, 00:14:17.561 "method": "nvmf_subsystem_remove_listener", 00:14:17.561 "req_id": 1 00:14:17.561 } 00:14:17.561 Got JSON-RPC error response 00:14:17.561 response: 00:14:17.561 { 00:14:17.561 "code": -32602, 00:14:17.561 "message": "Invalid parameters" 00:14:17.561 }' 00:14:17.561 21:57:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:17.561 { 00:14:17.561 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:17.561 "listen_address": { 00:14:17.561 "trtype": "tcp", 00:14:17.561 "traddr": "", 00:14:17.561 "trsvcid": "4421" 00:14:17.561 }, 00:14:17.561 "method": "nvmf_subsystem_remove_listener", 00:14:17.561 "req_id": 1 00:14:17.561 } 00:14:17.561 Got JSON-RPC error response 00:14:17.561 response: 00:14:17.561 { 00:14:17.561 "code": -32602, 00:14:17.561 "message": "Invalid parameters" 00:14:17.561 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:17.561 21:57:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17929 -i 0 00:14:17.818 [2024-07-13 21:57:37.051357] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17929: invalid cntlid range [0-65519] 00:14:17.818 21:57:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:17.818 { 00:14:17.818 "nqn": "nqn.2016-06.io.spdk:cnode17929", 00:14:17.818 "min_cntlid": 0, 00:14:17.818 "method": "nvmf_create_subsystem", 00:14:17.818 "req_id": 1 00:14:17.818 } 00:14:17.818 Got JSON-RPC error response 00:14:17.818 response: 00:14:17.818 { 00:14:17.818 "code": -32602, 00:14:17.819 "message": "Invalid cntlid range [0-65519]" 00:14:17.819 }' 00:14:17.819 21:57:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:17.819 { 00:14:17.819 "nqn": "nqn.2016-06.io.spdk:cnode17929", 00:14:17.819 "min_cntlid": 0, 00:14:17.819 "method": "nvmf_create_subsystem", 00:14:17.819 
"req_id": 1 00:14:17.819 } 00:14:17.819 Got JSON-RPC error response 00:14:17.819 response: 00:14:17.819 { 00:14:17.819 "code": -32602, 00:14:17.819 "message": "Invalid cntlid range [0-65519]" 00:14:17.819 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:17.819 21:57:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14048 -i 65520 00:14:18.077 [2024-07-13 21:57:37.300208] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14048: invalid cntlid range [65520-65519] 00:14:18.077 21:57:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:18.077 { 00:14:18.077 "nqn": "nqn.2016-06.io.spdk:cnode14048", 00:14:18.077 "min_cntlid": 65520, 00:14:18.077 "method": "nvmf_create_subsystem", 00:14:18.077 "req_id": 1 00:14:18.077 } 00:14:18.077 Got JSON-RPC error response 00:14:18.077 response: 00:14:18.077 { 00:14:18.077 "code": -32602, 00:14:18.077 "message": "Invalid cntlid range [65520-65519]" 00:14:18.077 }' 00:14:18.077 21:57:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:18.077 { 00:14:18.077 "nqn": "nqn.2016-06.io.spdk:cnode14048", 00:14:18.077 "min_cntlid": 65520, 00:14:18.077 "method": "nvmf_create_subsystem", 00:14:18.077 "req_id": 1 00:14:18.077 } 00:14:18.077 Got JSON-RPC error response 00:14:18.077 response: 00:14:18.077 { 00:14:18.077 "code": -32602, 00:14:18.077 "message": "Invalid cntlid range [65520-65519]" 00:14:18.077 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:18.077 21:57:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20976 -I 0 00:14:18.336 [2024-07-13 21:57:37.553056] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20976: invalid cntlid range [1-0] 00:14:18.336 21:57:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:18.336 { 00:14:18.336 "nqn": "nqn.2016-06.io.spdk:cnode20976", 00:14:18.336 "max_cntlid": 0, 00:14:18.336 "method": "nvmf_create_subsystem", 00:14:18.336 "req_id": 1 00:14:18.336 } 00:14:18.336 Got JSON-RPC error response 00:14:18.336 response: 00:14:18.336 { 00:14:18.336 "code": -32602, 00:14:18.336 "message": "Invalid cntlid range [1-0]" 00:14:18.336 }' 00:14:18.336 21:57:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:18.336 { 00:14:18.336 "nqn": "nqn.2016-06.io.spdk:cnode20976", 00:14:18.336 "max_cntlid": 0, 00:14:18.336 "method": "nvmf_create_subsystem", 00:14:18.336 "req_id": 1 00:14:18.336 } 00:14:18.336 Got JSON-RPC error response 00:14:18.336 response: 00:14:18.336 { 00:14:18.336 "code": -32602, 00:14:18.336 "message": "Invalid cntlid range [1-0]" 00:14:18.336 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:18.336 21:57:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14272 -I 65520 00:14:18.594 [2024-07-13 21:57:37.813992] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14272: invalid cntlid range [1-65520] 00:14:18.594 21:57:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:18.594 { 00:14:18.594 "nqn": "nqn.2016-06.io.spdk:cnode14272", 00:14:18.594 "max_cntlid": 65520, 00:14:18.594 "method": "nvmf_create_subsystem", 00:14:18.594 "req_id": 1 00:14:18.594 } 
00:14:18.594 Got JSON-RPC error response 00:14:18.594 response: 00:14:18.594 { 00:14:18.594 "code": -32602, 00:14:18.594 "message": "Invalid cntlid range [1-65520]" 00:14:18.594 }' 00:14:18.594 21:57:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:18.594 { 00:14:18.594 "nqn": "nqn.2016-06.io.spdk:cnode14272", 00:14:18.594 "max_cntlid": 65520, 00:14:18.594 "method": "nvmf_create_subsystem", 00:14:18.594 "req_id": 1 00:14:18.594 } 00:14:18.594 Got JSON-RPC error response 00:14:18.594 response: 00:14:18.594 { 00:14:18.594 "code": -32602, 00:14:18.594 "message": "Invalid cntlid range [1-65520]" 00:14:18.594 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:18.594 21:57:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1233 -i 6 -I 5 00:14:18.852 [2024-07-13 21:57:38.050759] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1233: invalid cntlid range [6-5] 00:14:18.852 21:57:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:18.852 { 00:14:18.852 "nqn": "nqn.2016-06.io.spdk:cnode1233", 00:14:18.852 "min_cntlid": 6, 00:14:18.852 "max_cntlid": 5, 00:14:18.852 "method": "nvmf_create_subsystem", 00:14:18.852 "req_id": 1 00:14:18.852 } 00:14:18.852 Got JSON-RPC error response 00:14:18.852 response: 00:14:18.852 { 00:14:18.852 "code": -32602, 00:14:18.852 "message": "Invalid cntlid range [6-5]" 00:14:18.852 }' 00:14:18.852 21:57:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:18.852 { 00:14:18.852 "nqn": "nqn.2016-06.io.spdk:cnode1233", 00:14:18.852 "min_cntlid": 6, 00:14:18.852 "max_cntlid": 5, 00:14:18.852 "method": "nvmf_create_subsystem", 00:14:18.852 "req_id": 1 00:14:18.852 } 00:14:18.852 Got JSON-RPC error response 00:14:18.852 response: 00:14:18.852 { 00:14:18.852 "code": -32602, 00:14:18.852 "message": "Invalid cntlid range [6-5]" 00:14:18.852 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:18.852 21:57:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:18.852 21:57:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:18.852 { 00:14:18.852 "name": "foobar", 00:14:18.852 "method": "nvmf_delete_target", 00:14:18.852 "req_id": 1 00:14:18.852 } 00:14:18.852 Got JSON-RPC error response 00:14:18.852 response: 00:14:18.852 { 00:14:18.852 "code": -32602, 00:14:18.852 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:18.852 }' 00:14:18.852 21:57:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:18.852 { 00:14:18.852 "name": "foobar", 00:14:18.852 "method": "nvmf_delete_target", 00:14:18.852 "req_id": 1 00:14:18.852 } 00:14:18.852 Got JSON-RPC error response 00:14:18.852 response: 00:14:18.852 { 00:14:18.852 "code": -32602, 00:14:18.852 "message": "The specified target doesn't exist, cannot delete it." 
00:14:18.852 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:18.852 21:57:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:18.852 21:57:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:18.852 21:57:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:18.852 21:57:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:14:18.852 21:57:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:18.852 21:57:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:14:18.852 21:57:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:18.852 21:57:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:18.852 rmmod nvme_tcp 00:14:18.852 rmmod nvme_fabrics 00:14:18.852 rmmod nvme_keyring 00:14:18.852 21:57:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:18.852 21:57:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:14:18.852 21:57:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:14:18.852 21:57:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 4014955 ']' 00:14:18.852 21:57:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 4014955 00:14:18.852 21:57:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 4014955 ']' 00:14:18.852 21:57:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 4014955 00:14:18.852 21:57:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:14:18.852 21:57:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:18.852 21:57:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4014955 00:14:19.111 21:57:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:19.111 21:57:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:19.111 21:57:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4014955' 00:14:19.111 killing process with pid 4014955 00:14:19.111 21:57:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 4014955 00:14:19.111 21:57:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 4014955 00:14:20.517 21:57:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:20.517 21:57:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:20.517 21:57:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:20.517 21:57:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:20.517 21:57:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:20.517 21:57:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.517 21:57:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:20.517 21:57:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.420 21:57:41 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:22.420 00:14:22.420 real 0m10.246s 00:14:22.420 user 0m24.376s 00:14:22.420 sys 0m2.612s 00:14:22.420 21:57:41 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:22.420 21:57:41 
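The shutdown traced above is deliberately defensive: nvmftestfini syncs, unloads the nvme-tcp and nvme-fabrics initiator modules (retrying the unload in a 1..20 loop), and only then calls killprocess, which re-checks that the recorded pid still belongs to an SPDK reactor (ps -o comm= reads reactor_0) and treats a sudo-wrapped process as a special case before killing and reaping it. A trimmed sketch of that sequence, with the pid passed in rather than taken from a global; the sudo branch simply bails where the real helper presumably walks to the wrapped child:

# Sketch of the nvmftestfini/killprocess shutdown sequence seen above.
killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 0      # already gone, nothing to do
    name=$(ps --no-headers -o comm= "$pid")     # same probe as the trace
    if [[ $name == sudo ]]; then
        # the real helper handles the sudo-wrapped case specially; a sketch
        # can refuse rather than guess at that logic
        echo "pid $pid is a sudo wrapper, not killing it"; return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap it if it is our child
}

sync
modprobe -v -r nvme-tcp        # the harness retries this unload up to 20 times
modprobe -v -r nvme-fabrics
killprocess "$nvmfpid"         # nvmfpid was recorded at startup (4014955 here)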
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:22.420 ************************************ 00:14:22.420 END TEST nvmf_invalid 00:14:22.420 ************************************ 00:14:22.420 21:57:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:22.420 21:57:41 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:22.420 21:57:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:22.420 21:57:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:22.420 21:57:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:22.420 ************************************ 00:14:22.420 START TEST nvmf_abort 00:14:22.420 ************************************ 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:22.420 * Looking for test storage... 00:14:22.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:22.420 21:57:41 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:14:22.420 21:57:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:24.320 
21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:24.320 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:24.320 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:24.320 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:24.321 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:24.321 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:24.321 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:24.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:24.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:14:24.579 00:14:24.579 --- 10.0.0.2 ping statistics --- 00:14:24.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.579 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:24.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:24.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:14:24.579 00:14:24.579 --- 10.0.0.1 ping statistics --- 00:14:24.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.579 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=4017724 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 4017724 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 4017724 ']' 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:24.579 21:57:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:24.579 [2024-07-13 21:57:43.921546] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:24.579 [2024-07-13 21:57:43.921702] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.837 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.837 [2024-07-13 21:57:44.061197] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:25.095 [2024-07-13 21:57:44.321495] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.095 [2024-07-13 21:57:44.321566] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.095 [2024-07-13 21:57:44.321599] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.095 [2024-07-13 21:57:44.321620] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.095 [2024-07-13 21:57:44.321641] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:25.095 [2024-07-13 21:57:44.321772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.095 [2024-07-13 21:57:44.321821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.095 [2024-07-13 21:57:44.321831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:25.660 21:57:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:25.660 21:57:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:14:25.660 21:57:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:25.660 21:57:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:25.660 21:57:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:25.660 21:57:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.660 21:57:44 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:14:25.660 21:57:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.660 21:57:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:25.660 [2024-07-13 21:57:44.888919] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.660 21:57:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.660 21:57:44 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:14:25.660 21:57:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.660 21:57:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:25.660 Malloc0 00:14:25.660 21:57:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.660 21:57:44 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:25.660 21:57:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.660 21:57:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:25.660 Delay0 00:14:25.660 21:57:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.660 21:57:44 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
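For orientation, the rpc_cmd xtrace above reduces to a handful of RPCs against the freshly started target. A minimal sketch of the same bring-up, assuming a running nvmf_tgt and SPDK's stock scripts/rpc.py, with $SPDK standing in for the checkout path (the delay-bdev latencies are in microseconds, so 1000000 adds roughly a second of latency, which keeps I/O in flight long enough to be aborted):

  RPC="$SPDK/scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256      # TCP transport with the test's option set
  $RPC bdev_malloc_create 64 4096 -b Malloc0               # 64 MB RAM-backed bdev, 4096-byte blocks
  $RPC bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000         # wrap Malloc0 in a slow delay bdev
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # allow any host, serial SPDK0
  # abort.sh then attaches Delay0 as namespace 1 and adds a TCP listener on 10.0.0.2:4420,
  # which is what the next log entries below are doing.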
00:14:25.660 21:57:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.660 21:57:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:25.660 21:57:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.660 21:57:44 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:14:25.660 21:57:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.660 21:57:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:25.660 21:57:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.660 21:57:45 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:25.660 21:57:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.660 21:57:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:25.660 [2024-07-13 21:57:45.012862] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.660 21:57:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.660 21:57:45 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:25.660 21:57:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.660 21:57:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:25.661 21:57:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.661 21:57:45 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:14:25.918 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.918 [2024-07-13 21:57:45.180682] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:28.444 Initializing NVMe Controllers 00:14:28.444 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:28.444 controller IO queue size 128 less than required 00:14:28.444 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:14:28.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:14:28.444 Initialization complete. Launching workers. 
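The workload itself is SPDK's abort example, which queues reads against the delayed namespace and then fires Abort admin commands at them. A hedged sketch of re-running it by hand, with flag meanings inferred from the command line above and the example's usage text (worth confirming against your SPDK revision: -q queue depth, -t run time in seconds, -c core mask, -l log level, -r transport ID):

  $SPDK/build/examples/abort -q 128 -t 1 -c 0x1 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

In the summary that follows, "I/O completed" vs. "failed" splits reads that finished normally from reads terminated by an abort, and "success/unsuccess/failed" roughly counts abort commands that aborted an I/O, completed without finding one to abort, or errored outright; the exact accounting lives in examples/nvme/abort in the SPDK tree.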
00:14:28.444 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 126, failed: 25201 00:14:28.444 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 25261, failed to submit 66 00:14:28.444 success 25201, unsuccess 60, failed 0 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:28.444 rmmod nvme_tcp 00:14:28.444 rmmod nvme_fabrics 00:14:28.444 rmmod nvme_keyring 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 4017724 ']' 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 4017724 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 4017724 ']' 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 4017724 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4017724 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4017724' 00:14:28.444 killing process with pid 4017724 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 4017724 00:14:28.444 21:57:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 4017724 00:14:29.821 21:57:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:29.821 21:57:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:29.821 21:57:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:29.821 21:57:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:29.821 21:57:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:29.821 21:57:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.821 21:57:48 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:29.821 21:57:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.725 21:57:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:31.725 00:14:31.725 real 0m9.229s 00:14:31.725 user 0m14.737s 00:14:31.725 sys 0m2.829s 00:14:31.725 21:57:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:31.725 21:57:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:31.725 ************************************ 00:14:31.725 END TEST nvmf_abort 00:14:31.725 ************************************ 00:14:31.725 21:57:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:31.725 21:57:50 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:31.725 21:57:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:31.725 21:57:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.725 21:57:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:31.725 ************************************ 00:14:31.725 START TEST nvmf_ns_hotplug_stress 00:14:31.725 ************************************ 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:31.725 * Looking for test storage... 00:14:31.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.725 21:57:50 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:31.725 21:57:50 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:31.725 21:57:50 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:33.623 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:33.623 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:33.624 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.624 21:57:52 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:33.624 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:33.624 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:33.624 21:57:52 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:33.624 21:57:52 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:33.624 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:33.624 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:33.624 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:33.883 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:33.883 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:33.883 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:33.883 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:33.883 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:33.883 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:33.883 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:33.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:33.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:14:33.883 00:14:33.883 --- 10.0.0.2 ping statistics --- 00:14:33.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.883 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:14:33.883 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:33.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:33.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:14:33.883 00:14:33.883 --- 10.0.0.1 ping statistics --- 00:14:33.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.883 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:14:33.883 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:33.883 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:14:33.883 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:33.883 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:33.883 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:33.883 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:33.883 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:33.883 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:33.883 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:33.883 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:14:33.883 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:33.883 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:33.883 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.883 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=4020206 00:14:33.883 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:33.884 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 4020206 00:14:33.884 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 4020206 ']' 00:14:33.884 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.884 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:33.884 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.884 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:33.884 21:57:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:33.884 [2024-07-13 21:57:53.256299] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:33.884 [2024-07-13 21:57:53.256441] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.141 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.141 [2024-07-13 21:57:53.397862] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:34.398 [2024-07-13 21:57:53.660954] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:34.398 [2024-07-13 21:57:53.661041] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:34.398 [2024-07-13 21:57:53.661076] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:34.398 [2024-07-13 21:57:53.661097] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:34.398 [2024-07-13 21:57:53.661123] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:34.398 [2024-07-13 21:57:53.661265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:34.398 [2024-07-13 21:57:53.661322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.398 [2024-07-13 21:57:53.661341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:34.963 21:57:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:34.963 21:57:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:14:34.963 21:57:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:34.963 21:57:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:34.963 21:57:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:34.963 21:57:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:34.963 21:57:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:14:34.963 21:57:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:35.221 [2024-07-13 21:57:54.444750] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:35.221 21:57:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:35.479 21:57:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:35.736 [2024-07-13 21:57:54.979174] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:35.736 21:57:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:35.994 21:57:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:14:36.252 Malloc0 00:14:36.252 21:57:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:36.511 Delay0 00:14:36.511 21:57:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:36.768 21:57:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:14:37.026 NULL1 00:14:37.026 21:57:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:37.284 21:57:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4020635 00:14:37.284 21:57:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:14:37.284 21:57:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:14:37.284 21:57:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:37.547 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.529 Read completed with error (sct=0, sc=11) 00:14:38.529 21:57:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:38.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:38.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:38.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:38.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:38.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:38.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:39.046 21:57:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:14:39.046 21:57:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:39.046 true 00:14:39.046 21:57:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:14:39.046 21:57:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.982 21:57:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:40.239 21:57:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:14:40.239 21:57:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:40.497 true 00:14:40.497 21:57:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:14:40.497 21:57:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.755 21:57:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:41.013 21:58:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:14:41.013 21:58:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:41.271 true 00:14:41.271 21:58:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:14:41.271 21:58:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.528 21:58:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:41.786 21:58:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:14:41.786 21:58:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:14:41.786 true 00:14:42.044 21:58:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:14:42.044 21:58:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.977 21:58:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:42.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:42.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:42.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:43.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:43.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:43.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:43.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:43.235 21:58:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:14:43.235 21:58:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:14:43.493 true 00:14:43.493 21:58:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:14:43.493 21:58:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:14:44.423 21:58:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:44.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:44.680 21:58:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:14:44.680 21:58:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:14:44.680 true 00:14:44.680 21:58:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:14:44.680 21:58:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.938 21:58:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:45.195 21:58:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:14:45.195 21:58:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:14:45.452 true 00:14:45.452 21:58:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:14:45.452 21:58:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.823 21:58:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:46.823 21:58:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:14:46.823 21:58:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:47.080 true 00:14:47.080 21:58:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:14:47.080 21:58:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.338 21:58:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:47.596 21:58:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:14:47.596 21:58:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:47.853 true 00:14:47.853 21:58:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:14:47.853 21:58:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:48.786 21:58:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:48.786 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:49.044 21:58:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:14:49.044 21:58:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:49.301 true 00:14:49.301 21:58:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:14:49.301 21:58:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.558 21:58:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:49.816 21:58:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:14:49.816 21:58:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:49.816 true 00:14:49.816 21:58:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:14:49.816 21:58:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.764 21:58:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:50.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:51.021 21:58:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:14:51.021 21:58:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:51.278 true 00:14:51.278 21:58:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:14:51.278 21:58:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.536 21:58:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:51.793 21:58:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:51.793 21:58:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:52.050 true 00:14:52.050 21:58:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill 
-0 4020635 00:14:52.050 21:58:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.011 21:58:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:53.011 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:53.269 21:58:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:53.269 21:58:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:53.527 true 00:14:53.527 21:58:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:14:53.527 21:58:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.784 21:58:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:54.042 21:58:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:54.042 21:58:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:54.299 true 00:14:54.299 21:58:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:14:54.299 21:58:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.230 21:58:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:55.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:55.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:55.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:55.488 21:58:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:55.488 21:58:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:55.746 true 00:14:55.746 21:58:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:14:55.746 21:58:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.003 21:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:56.261 21:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:14:56.261 21:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:56.519 true 00:14:56.519 21:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:14:56.519 21:58:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:57.452 21:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:57.710 21:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:14:57.710 21:58:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:57.968 true 00:14:57.968 21:58:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:14:57.968 21:58:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:58.225 21:58:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:58.483 21:58:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:58.484 21:58:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:58.742 true 00:14:58.742 21:58:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:14:58.742 21:58:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.675 21:58:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:59.675 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:59.675 21:58:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:59.675 21:58:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:59.933 true 00:14:59.933 21:58:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:14:59.933 21:58:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:00.191 21:58:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:00.449 21:58:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:15:00.449 21:58:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1021 00:15:00.707 true 00:15:00.707 21:58:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:15:00.707 21:58:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:01.640 21:58:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:01.898 21:58:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:15:01.898 21:58:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:15:01.898 true 00:15:02.156 21:58:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:15:02.156 21:58:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:02.414 21:58:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:02.414 21:58:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:15:02.414 21:58:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:15:02.672 true 00:15:02.672 21:58:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:15:02.672 21:58:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:03.608 21:58:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:03.608 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:03.608 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:03.867 21:58:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:15:03.867 21:58:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:15:04.124 true 00:15:04.124 21:58:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:15:04.124 21:58:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:04.381 21:58:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:04.638 21:58:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:15:04.638 21:58:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1025 00:15:04.895 true 00:15:04.895 21:58:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:15:04.895 21:58:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.827 21:58:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:05.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:06.084 21:58:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:15:06.084 21:58:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:15:06.342 true 00:15:06.342 21:58:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:15:06.342 21:58:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:06.600 21:58:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:06.858 21:58:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:15:06.858 21:58:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:15:06.858 true 00:15:07.116 21:58:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635 00:15:07.116 21:58:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:08.099 Initializing NVMe Controllers 00:15:08.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:08.099 Controller IO queue size 128, less than required. 00:15:08.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:08.099 Controller IO queue size 128, less than required. 00:15:08.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:08.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:08.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:08.099 Initialization complete. Launching workers. 
00:15:08.099 ========================================================
00:15:08.099 Latency(us)
00:15:08.099 Device Information : IOPS MiB/s Average min max
00:15:08.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 968.33 0.47 75292.01 2829.81 1129445.48
00:15:08.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8116.39 3.96 15773.14 5996.71 485257.32
00:15:08.099 ========================================================
00:15:08.099 Total : 9084.72 4.44 22117.18 2829.81 1129445.48
00:15:08.099
00:15:08.099 21:58:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:15:08.358 21:58:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:15:08.358 21:58:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:15:08.358 true
00:15:08.615 21:58:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4020635
00:15:08.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4020635) - No such process
00:15:08.615 21:58:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 4020635
00:15:08.615 21:58:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:08.872 21:58:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:15:09.131 21:58:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:15:09.131 21:58:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:15:09.131 21:58:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:15:09.131 21:58:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:15:09.131 21:58:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:15:09.131 null0
00:15:09.131 21:58:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:15:09.131 21:58:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:15:09.131 21:58:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:15:09.388 null1
00:15:09.388 21:58:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:15:09.388 21:58:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:15:09.388 21:58:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:15:09.644 null2
00:15:09.644 21:58:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:15:09.644 21:58:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:15:09.644 21:58:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:15:09.901 null3
00:15:09.901 21:58:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:15:09.901 21:58:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:15:09.902 21:58:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:15:10.159 null4
00:15:10.159 21:58:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:15:10.159 21:58:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:15:10.159 21:58:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:15:10.417 null5
00:15:10.417 21:58:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:15:10.417 21:58:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:15:10.417 21:58:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:15:10.675 null6
00:15:10.675 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:15:10.675 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:15:10.675 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:15:10.933 null7
00:15:10.933 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:15:10.933 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:15:10.933 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:15:10.933 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:15:10.933 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:15:10.933 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:15:10.933 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:15:10.933 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:15:10.933 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:15:10.933 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:15:10.933 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:15:10.933 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:15:10.933 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
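
[Note] Two observations on the output above. First, the Total row of the latency summary is the IOPS-weighted mean of the two namespaces: 968.33/9084.72 * 75292.01 + 8116.39/9084.72 * 15773.14 ~= 22117 us, matching the reported 22117.18; the Total min and max are simply the extremes across the two namespaces (2829.81 and 1129445.48). Second, the records tagged ns_hotplug_stress.sh@58-@64 above trace the fan-out phase of the test. What follows is a minimal sketch reconstructed from those trace records, not a verbatim copy of ns_hotplug_stress.sh: only the traced commands and the names nthreads, pids, add_remove and null0..null7 are certain; the loop syntax and the $rpc shorthand are assumptions.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed shorthand; the trace shows the expanded path
  nthreads=8                                    # sh@58
  pids=()                                       # sh@58
  for ((i = 0; i < nthreads; ++i)); do          # sh@59: traced as (( i = 0 )), (( i < nthreads )), (( ++i ))
          "$rpc" bdev_null_create "null$i" 100 4096   # sh@60: 100 MB null bdev, 4096-byte blocks
  done
  for ((i = 0; i < nthreads; ++i)); do          # sh@62
          add_remove $((i + 1)) "null$i" &      # sh@63: one backgrounded hotplug worker per namespace ID
          pids+=($!)                            # sh@64
  done
  wait "${pids[@]}"                             # sh@66: expands to the eight worker pids in the wait record below
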
00:15:10.933 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
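
[Note] The interleaved records above and below come from those eight concurrent workers; the lines tagged sh@14-@18 trace the body of add_remove itself (for example "local nsid=2 bdev=null1"). A sketch under the same caveats, reusing the assumed $rpc shorthand:

  add_remove() {                                # traced at sh@14-@18
          local nsid=$1 bdev=$2                 # sh@14
          for ((i = 0; i < 10; ++i)); do        # sh@16: ten add/remove rounds per worker
                  "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # sh@17
                  "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # sh@18
          done
  }

Each worker repeatedly attaches and detaches a single namespace ID while the host-side connection stays up, which is exactly the interleaving visible in the surrounding records.
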
00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
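
[Note] For comparison, the first half of this test (the null_size=1002..1028 records earlier, lines sh@44-@50) ran a single-namespace variant: while the background I/O process (pid 4020635 here) stayed alive, namespace 1 was detached, re-attached as Delay0, and the NULL1 bdev was grown by one unit per pass. A hedged sketch reconstructed from that trace; perf_pid is an assumed variable name for the traced pid:

  while kill -0 "$perf_pid"; do                 # sh@44: loop while the I/O generator runs
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # sh@45
          "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # sh@46
          null_size=$((null_size + 1))          # sh@49: traced as null_size=1002, 1003, ... 1028
          "$rpc" bdev_null_resize NULL1 "$null_size"                       # sh@50
  done

That loop ended when kill -0 reported "No such process" (00:15:08.615 in the summary above), after which sh@53 waited on the finished pid and the multi-worker phase traced here began.
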
00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4024675 4024676 4024678 4024680 4024682 4024684 4024686 4024688 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:10.934 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:11.192 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.192 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:11.192 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:11.192 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:11.192 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:11.192 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:11.192 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:11.192 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:11.450 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.450 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.450 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:11.450 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.450 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.450 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:15:11.450 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.450 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.450 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:11.450 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.450 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.450 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:11.450 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.450 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.450 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:11.450 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.450 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.450 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:11.450 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.450 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.450 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:11.450 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.450 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.450 21:58:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:11.707 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:11.707 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:11.708 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:11.708 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:11.708 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:11.708 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:11.708 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:11.708 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:11.965 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.965 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.965 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:11.965 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.965 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.965 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:11.965 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.965 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.965 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:11.965 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.965 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.965 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:11.965 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:11.965 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:11.965 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:12.223 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.223 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.223 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:12.223 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.223 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.223 21:58:31 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:12.223 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.223 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.223 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:12.223 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:12.480 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:12.480 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:12.480 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:12.480 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:12.480 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:12.480 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:12.480 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:12.737 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.737 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.737 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:12.737 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.737 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.737 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:12.737 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.737 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.737 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:12.737 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.737 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.737 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:12.737 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.737 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.737 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:12.737 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.737 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.737 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.737 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.737 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:12.737 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:12.737 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:12.737 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:12.737 21:58:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:12.994 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:12.994 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:12.994 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:12.994 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:12.994 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:12.994 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:12.994 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:12.994 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:13.274 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.274 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.274 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:13.274 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.274 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.274 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:13.274 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.274 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.274 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:13.274 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.274 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.274 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:13.274 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.274 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.274 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:13.274 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.274 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.274 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:13.274 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.274 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.274 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:13.274 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.274 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.274 
21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:13.531 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:13.531 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:13.531 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:13.531 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:13.531 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:13.531 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:13.531 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:13.531 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:13.787 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.788 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.788 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:13.788 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.788 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.788 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:13.788 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.788 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.788 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:13.788 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.788 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.788 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:13.788 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.788 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.788 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.788 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.788 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:13.788 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:13.788 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.788 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.788 21:58:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:13.788 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:13.788 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:13.788 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:14.045 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:14.045 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:14.045 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:14.045 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:14.045 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:14.045 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:14.045 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:14.045 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:14.303 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:15:14.303 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.303 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:14.303 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.303 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.303 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:14.303 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.303 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.303 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:14.303 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.303 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.303 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.303 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:14.303 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.303 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:14.303 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.303 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.303 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:14.303 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.303 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.303 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:14.303 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.303 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.303 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:14.560 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:14.560 
21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:14.560 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:14.560 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:14.560 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:14.560 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:14.560 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:14.561 21:58:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:14.818 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.818 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.819 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:14.819 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.819 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.819 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:14.819 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.819 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.819 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.819 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.819 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:14.819 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:14.819 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.819 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.819 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:14.819 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.819 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.819 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:14.819 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.819 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.819 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:14.819 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:14.819 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:14.819 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:15.076 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:15.076 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:15.076 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:15.076 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.076 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:15.076 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:15.076 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:15.076 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:15.334 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.334 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.334 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:15.334 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:15:15.334 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.334 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:15.334 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.334 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.334 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:15.334 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.334 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.334 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.334 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.334 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:15.334 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:15.334 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.334 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.334 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:15.334 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.334 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.334 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:15.334 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.334 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.334 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:15.592 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:15.592 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:15.592 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:15.592 
21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:15.592 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.592 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:15.592 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:15.592 21:58:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:15.851 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.851 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.851 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:15.851 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.851 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.851 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:15.851 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.851 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.851 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:15.851 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.851 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.851 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:15.851 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.851 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.851 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:15.851 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.851 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.851 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:15.851 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.851 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.851 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:15.851 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:15.851 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.851 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:16.109 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:16.109 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:16.109 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:16.109 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:16.109 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:16.109 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:16.109 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:16.109 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:16.368 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.368 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.368 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.368 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.368 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.368 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.368 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.368 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.368 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
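The interleaved @16-@18 markers above are the core of the ns_hotplug_stress loop: ten passes that attach eight null-bdev namespaces (nsid n backed by null$((n-1))) to nqn.2016-06.io.spdk:cnode1 and then detach them again while traffic is in flight. A minimal sketch of that loop, reconstructed only from this trace; the out-of-order timestamps suggest the RPCs are issued in parallel, which the sketch imitates with background jobs:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; ++i )); do                                   # @16
        for n in {1..8}; do
            # @17: attach namespace n, backed by null bdev null(n-1)
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" null$((n - 1)) &
        done
        wait
        for n in {1..8}; do
            # @18: detach the same namespace by nsid
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n" &
        done
        wait
    done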
00:15:16.368 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.368 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.368 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.368 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.368 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.368 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.368 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.368 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:16.368 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:15:16.368 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:16.368 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:15:16.368 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:16.368 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:15:16.368 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:16.368 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:16.368 rmmod nvme_tcp 00:15:16.368 rmmod nvme_fabrics 00:15:16.626 rmmod nvme_keyring 00:15:16.626 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:16.626 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:15:16.626 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:15:16.626 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 4020206 ']' 00:15:16.626 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 4020206 00:15:16.626 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 4020206 ']' 00:15:16.626 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 4020206 00:15:16.626 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:15:16.626 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:16.626 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4020206 00:15:16.626 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:16.626 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:16.626 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4020206' 00:15:16.626 killing process with pid 4020206 00:15:16.626 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 4020206 00:15:16.626 21:58:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 4020206 00:15:18.000 21:58:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:18.000 21:58:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:18.000 21:58:37 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:18.000 21:58:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:18.000 21:58:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:18.000 21:58:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.000 21:58:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:18.000 21:58:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.904 21:58:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:19.904 00:15:19.904 real 0m48.342s 00:15:19.904 user 3m35.055s 00:15:19.904 sys 0m16.556s 00:15:19.904 21:58:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:19.904 21:58:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:19.904 ************************************ 00:15:19.904 END TEST nvmf_ns_hotplug_stress 00:15:19.904 ************************************ 00:15:19.904 21:58:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:19.904 21:58:39 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:19.904 21:58:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:19.904 21:58:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:19.904 21:58:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:19.904 ************************************ 00:15:19.904 START TEST nvmf_connect_stress 00:15:19.904 ************************************ 00:15:19.904 21:58:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:20.162 * Looking for test storage... 
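The real/user/sys block, the "return 0", and the starred END/START banners above come from the run_test helper in autotest_common.sh, which brackets and times each sub-test before the next one begins. A simplified sketch of what it does (the real helper also manages xtrace state and failure propagation, omitted here):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # emits the real/user/sys lines once the test script exits
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }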
00:15:20.163 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:20.163 21:58:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:22.073 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:22.073 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.073 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:22.074 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:22.074 21:58:41 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:22.074 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:22.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:22.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:15:22.074 00:15:22.074 --- 10.0.0.2 ping statistics --- 00:15:22.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.074 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:22.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:22.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:15:22.074 00:15:22.074 --- 10.0.0.1 ping statistics --- 00:15:22.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.074 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:22.074 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:22.333 21:58:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:22.333 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:22.333 21:58:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:22.333 21:58:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:22.333 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=4027572 00:15:22.333 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 4027572 00:15:22.333 21:58:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 4027572 ']' 00:15:22.333 21:58:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.333 21:58:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:22.333 21:58:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:22.333 21:58:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.333 21:58:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:22.333 21:58:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:22.333 [2024-07-13 21:58:41.570098] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:15:22.333 [2024-07-13 21:58:41.570253] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.333 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.333 [2024-07-13 21:58:41.703600] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:22.592 [2024-07-13 21:58:41.934044] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.592 [2024-07-13 21:58:41.934129] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.592 [2024-07-13 21:58:41.934172] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:22.592 [2024-07-13 21:58:41.934188] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:22.592 [2024-07-13 21:58:41.934204] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:22.592 [2024-07-13 21:58:41.934334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.592 [2024-07-13 21:58:41.934360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.592 [2024-07-13 21:58:41.934372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.159 [2024-07-13 21:58:42.500751] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.159 [2024-07-13 21:58:42.534433] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.159 NULL1 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=4027726 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:23.159 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.418 21:58:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.418 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.676 21:58:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.676 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:23.676 21:58:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:23.676 21:58:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.676 21:58:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.952 21:58:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.952 21:58:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:23.952 21:58:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:23.952 21:58:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.952 21:58:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:24.214 21:58:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.214 21:58:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 
00:15:24.214 21:58:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:24.214 21:58:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.214 21:58:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:24.780 21:58:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.780 21:58:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:24.780 21:58:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:24.780 21:58:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.780 21:58:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:25.039 21:58:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.039 21:58:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:25.039 21:58:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:25.039 21:58:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.039 21:58:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:25.297 21:58:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.297 21:58:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:25.297 21:58:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:25.297 21:58:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.297 21:58:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:25.556 21:58:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.556 21:58:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:25.556 21:58:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:25.556 21:58:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.556 21:58:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:25.814 21:58:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.814 21:58:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:25.814 21:58:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:25.814 21:58:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.814 21:58:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.381 21:58:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.381 21:58:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:26.381 21:58:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:26.381 21:58:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.381 21:58:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.639 21:58:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.639 21:58:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:26.639 21:58:45 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:15:26.639 21:58:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.639 21:58:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.897 21:58:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.897 21:58:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:26.897 21:58:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:26.897 21:58:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.897 21:58:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:27.155 21:58:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.155 21:58:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:27.155 21:58:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:27.155 21:58:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.155 21:58:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:27.722 21:58:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.722 21:58:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:27.722 21:58:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:27.722 21:58:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.722 21:58:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:27.981 21:58:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.981 21:58:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:27.981 21:58:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:27.981 21:58:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.981 21:58:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.239 21:58:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.239 21:58:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:28.239 21:58:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.239 21:58:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.239 21:58:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.497 21:58:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.497 21:58:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:28.497 21:58:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.497 21:58:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.497 21:58:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.756 21:58:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.756 21:58:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:28.756 21:58:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.756 
21:58:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.756 21:58:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:29.322 21:58:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.322 21:58:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:29.322 21:58:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.322 21:58:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.322 21:58:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:29.581 21:58:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.581 21:58:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:29.581 21:58:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.581 21:58:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.581 21:58:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:29.839 21:58:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.839 21:58:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:29.839 21:58:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.839 21:58:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.839 21:58:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.098 21:58:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.098 21:58:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:30.098 21:58:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:30.098 21:58:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.098 21:58:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.356 21:58:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.356 21:58:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:30.356 21:58:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:30.356 21:58:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.356 21:58:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.922 21:58:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.922 21:58:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:30.922 21:58:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:30.922 21:58:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.922 21:58:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:31.192 21:58:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.192 21:58:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:31.192 21:58:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.192 21:58:50 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.192 21:58:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:31.453 21:58:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.453 21:58:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:31.453 21:58:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.453 21:58:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.453 21:58:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:31.710 21:58:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.710 21:58:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:31.710 21:58:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.710 21:58:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.710 21:58:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:31.967 21:58:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.967 21:58:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:31.967 21:58:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.967 21:58:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.967 21:58:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.531 21:58:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.531 21:58:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:32.531 21:58:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.531 21:58:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.531 21:58:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.789 21:58:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.789 21:58:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:32.789 21:58:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.789 21:58:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.789 21:58:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.047 21:58:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.047 21:58:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:33.047 21:58:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.047 21:58:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.047 21:58:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.305 21:58:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.305 21:58:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:33.305 21:58:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.305 21:58:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
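Each polling iteration in this stretch is the same four traced steps: the [[ 0 == 0 ]] assertion on the previous command's status (autotest_common.sh@587), the kill -0 probe, an rpc_cmd call, and the xtrace_disable / set +x pair that mutes shell tracing inside the helper. Roughly, the wrapper behaves like the stand-in below; this is a reading aid only, not SPDK's actual implementation (the real helper in autotest_common.sh is more elaborate):

  rpc_cmd() {
      xtrace_disable                  # produces the 'set +x' entries seen above
      "$rootdir/scripts/rpc.py" "$@"  # $rootdir assumed: the spdk checkout
      local rc=$?
      xtrace_restore
      return $rc                      # the caller then asserts [[ $rc == 0 ]]
  }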
00:15:33.305 21:58:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.563 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:33.822 21:58:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.822 21:58:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4027726 00:15:33.822 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (4027726) - No such process 00:15:33.822 21:58:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 4027726 00:15:33.822 21:58:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:33.822 21:58:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:33.822 21:58:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:33.822 21:58:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:33.822 21:58:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:15:33.822 21:58:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:33.822 21:58:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:15:33.822 21:58:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:33.822 21:58:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:33.822 rmmod nvme_tcp 00:15:33.822 rmmod nvme_fabrics 00:15:33.822 rmmod nvme_keyring 00:15:33.822 21:58:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:33.822 21:58:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:15:33.822 21:58:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:15:33.822 21:58:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 4027572 ']' 00:15:33.822 21:58:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 4027572 00:15:33.822 21:58:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 4027572 ']' 00:15:33.822 21:58:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 4027572 00:15:33.822 21:58:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:15:33.822 21:58:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:33.822 21:58:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4027572 00:15:33.822 21:58:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:33.822 21:58:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:33.822 21:58:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4027572' 00:15:33.822 killing process with pid 4027572 00:15:33.822 21:58:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 4027572 00:15:33.822 21:58:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 4027572 00:15:35.230 21:58:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:35.230 21:58:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:35.230 21:58:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:15:35.230 21:58:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:35.230 21:58:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:35.230 21:58:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.230 21:58:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.230 21:58:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.135 21:58:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:37.135 00:15:37.135 real 0m17.071s 00:15:37.135 user 0m42.353s 00:15:37.135 sys 0m5.919s 00:15:37.135 21:58:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:37.135 21:58:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:37.135 ************************************ 00:15:37.135 END TEST nvmf_connect_stress 00:15:37.135 ************************************ 00:15:37.135 21:58:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:37.135 21:58:56 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:37.135 21:58:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:37.135 21:58:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:37.135 21:58:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:37.135 ************************************ 00:15:37.135 START TEST nvmf_fused_ordering 00:15:37.135 ************************************ 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:37.135 * Looking for test storage... 
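The teardown traced just before this banner is the standard nvmftestfini sequence from nvmf/common.sh: sync, unload the kernel NVMe modules under set +e with retries (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), kill the nvmf_tgt process by PID, remove the target's network namespace, and flush the leftover test address. Condensed into shell form (simplified, not verbatim):

  sync
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break   # may need retries while references drain
  done
  modprobe -v -r nvme-fabrics
  set -e
  kill "$nvmfpid"                        # nvmf_tgt; pid 4027572 in this run
  _remove_spdk_ns                        # deletes the cvl_0_0_ns_spdk namespace
  ip -4 addr flush cvl_0_1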
00:15:37.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.135 21:58:56 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:37.136 21:58:56 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.136 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:15:37.136 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:37.136 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:37.136 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:37.136 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.136 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.136 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:37.136 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:37.136 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:37.136 21:58:56 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:37.136 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:37.136 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:37.136 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:37.136 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:37.136 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:37.136 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.136 21:58:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:37.136 21:58:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.136 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:37.136 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:37.136 21:58:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:15:37.136 21:58:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:39.664 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:39.664 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:39.664 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:39.664 21:58:58 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:39.664 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:39.664 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:39.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:39.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:15:39.665 00:15:39.665 --- 10.0.0.2 ping statistics --- 00:15:39.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.665 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:39.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:39.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:15:39.665 00:15:39.665 --- 10.0.0.1 ping statistics --- 00:15:39.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:39.665 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=4031123 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 4031123 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 4031123 ']' 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:39.665 21:58:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:39.665 [2024-07-13 21:58:58.684067] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
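The nvmf_tcp_init trace above builds the point-to-point test network for the two e810 ports: the target-side interface (cvl_0_0) moves into a private namespace, both ends get addresses on 10.0.0.0/24, port 4420 is opened in iptables, and a ping in each direction confirms the path before the target starts. The same commands, lifted out of the trace without the surrounding function scaffolding:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                 # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                           # host -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> host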
00:15:39.665 [2024-07-13 21:58:58.684224] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.665 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.665 [2024-07-13 21:58:58.821689] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.923 [2024-07-13 21:58:59.080547] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.923 [2024-07-13 21:58:59.080622] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.923 [2024-07-13 21:58:59.080650] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.923 [2024-07-13 21:58:59.080675] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.923 [2024-07-13 21:58:59.080696] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:39.923 [2024-07-13 21:58:59.080751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.490 21:58:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:40.490 21:58:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:15:40.490 21:58:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:40.490 21:58:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:40.490 21:58:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:40.490 21:58:59 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.490 21:58:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:40.490 21:58:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.490 21:58:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:40.490 [2024-07-13 21:58:59.610257] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:40.490 21:58:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.491 21:58:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:40.491 21:58:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.491 21:58:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:40.491 21:58:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.491 21:58:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:40.491 21:58:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.491 21:58:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:40.491 [2024-07-13 21:58:59.626443] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:40.491 21:58:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.491 21:58:59 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:40.491 21:58:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.491 21:58:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:40.491 NULL1 00:15:40.491 21:58:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.491 21:58:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:40.491 21:58:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.491 21:58:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:40.491 21:58:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.491 21:58:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:40.491 21:58:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.491 21:58:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:40.491 21:58:59 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.491 21:58:59 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:40.491 [2024-07-13 21:58:59.696834] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:40.491 [2024-07-13 21:58:59.697005] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4031273 ] 00:15:40.491 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.428 Attached to nqn.2016-06.io.spdk:cnode1 00:15:41.428 Namespace ID: 1 size: 1GB 00:15:41.428 fused_ordering(0) 00:15:41.428 fused_ordering(1) 00:15:41.428 fused_ordering(2) 00:15:41.428 fused_ordering(3) 00:15:41.428 fused_ordering(4) 00:15:41.428 fused_ordering(5) 00:15:41.428 fused_ordering(6) 00:15:41.428 fused_ordering(7) 00:15:41.428 fused_ordering(8) 00:15:41.428 fused_ordering(9) 00:15:41.428 fused_ordering(10) 00:15:41.428 fused_ordering(11) 00:15:41.428 fused_ordering(12) 00:15:41.428 fused_ordering(13) 00:15:41.428 fused_ordering(14) 00:15:41.428 fused_ordering(15) 00:15:41.428 fused_ordering(16) 00:15:41.428 fused_ordering(17) 00:15:41.428 fused_ordering(18) 00:15:41.428 fused_ordering(19) 00:15:41.428 fused_ordering(20) 00:15:41.428 fused_ordering(21) 00:15:41.428 fused_ordering(22) 00:15:41.428 fused_ordering(23) 00:15:41.428 fused_ordering(24) 00:15:41.428 fused_ordering(25) 00:15:41.428 fused_ordering(26) 00:15:41.428 fused_ordering(27) 00:15:41.428 fused_ordering(28) 00:15:41.428 fused_ordering(29) 00:15:41.428 fused_ordering(30) 00:15:41.428 fused_ordering(31) 00:15:41.428 fused_ordering(32) 00:15:41.428 fused_ordering(33) 00:15:41.428 fused_ordering(34) 00:15:41.428 fused_ordering(35) 00:15:41.428 fused_ordering(36) 00:15:41.428 fused_ordering(37) 00:15:41.428 fused_ordering(38) 00:15:41.428 fused_ordering(39) 00:15:41.428 fused_ordering(40) 00:15:41.428 fused_ordering(41) 00:15:41.428 fused_ordering(42) 00:15:41.428 fused_ordering(43) 00:15:41.428 
fused_ordering(44) 00:15:41.428 fused_ordering(45) 00:15:41.428 fused_ordering(46) 00:15:41.428 fused_ordering(47) 00:15:41.428 fused_ordering(48) 00:15:41.428 fused_ordering(49) 00:15:41.428 fused_ordering(50) 00:15:41.428 fused_ordering(51) 00:15:41.428 fused_ordering(52) 00:15:41.428 fused_ordering(53) 00:15:41.428 fused_ordering(54) 00:15:41.428 fused_ordering(55) 00:15:41.428 fused_ordering(56) 00:15:41.428 fused_ordering(57) 00:15:41.428 fused_ordering(58) 00:15:41.428 fused_ordering(59) 00:15:41.428 fused_ordering(60) 00:15:41.428 fused_ordering(61) 00:15:41.428 fused_ordering(62) 00:15:41.428 fused_ordering(63) 00:15:41.428 fused_ordering(64) 00:15:41.428 fused_ordering(65) 00:15:41.428 fused_ordering(66) 00:15:41.428 fused_ordering(67) 00:15:41.428 fused_ordering(68) 00:15:41.428 fused_ordering(69) 00:15:41.428 fused_ordering(70) 00:15:41.428 fused_ordering(71) 00:15:41.428 fused_ordering(72) 00:15:41.428 fused_ordering(73) 00:15:41.428 fused_ordering(74) 00:15:41.428 fused_ordering(75) 00:15:41.428 fused_ordering(76) 00:15:41.428 fused_ordering(77) 00:15:41.428 fused_ordering(78) 00:15:41.428 fused_ordering(79) 00:15:41.428 fused_ordering(80) 00:15:41.428 fused_ordering(81) 00:15:41.428 fused_ordering(82) 00:15:41.428 fused_ordering(83) 00:15:41.428 fused_ordering(84) 00:15:41.428 fused_ordering(85) 00:15:41.428 fused_ordering(86) 00:15:41.428 fused_ordering(87) 00:15:41.428 fused_ordering(88) 00:15:41.428 fused_ordering(89) 00:15:41.428 fused_ordering(90) 00:15:41.428 fused_ordering(91) 00:15:41.428 fused_ordering(92) 00:15:41.428 fused_ordering(93) 00:15:41.428 fused_ordering(94) 00:15:41.428 fused_ordering(95) 00:15:41.428 fused_ordering(96) 00:15:41.428 fused_ordering(97) 00:15:41.428 fused_ordering(98) 00:15:41.428 fused_ordering(99) 00:15:41.428 fused_ordering(100) 00:15:41.428 fused_ordering(101) 00:15:41.428 fused_ordering(102) 00:15:41.428 fused_ordering(103) 00:15:41.428 fused_ordering(104) 00:15:41.428 fused_ordering(105) 00:15:41.428 fused_ordering(106) 00:15:41.428 fused_ordering(107) 00:15:41.428 fused_ordering(108) 00:15:41.428 fused_ordering(109) 00:15:41.428 fused_ordering(110) 00:15:41.428 fused_ordering(111) 00:15:41.428 fused_ordering(112) 00:15:41.428 fused_ordering(113) 00:15:41.428 fused_ordering(114) 00:15:41.428 fused_ordering(115) 00:15:41.428 fused_ordering(116) 00:15:41.428 fused_ordering(117) 00:15:41.428 fused_ordering(118) 00:15:41.428 fused_ordering(119) 00:15:41.428 fused_ordering(120) 00:15:41.428 fused_ordering(121) 00:15:41.428 fused_ordering(122) 00:15:41.428 fused_ordering(123) 00:15:41.428 fused_ordering(124) 00:15:41.428 fused_ordering(125) 00:15:41.428 fused_ordering(126) 00:15:41.428 fused_ordering(127) 00:15:41.428 fused_ordering(128) 00:15:41.428 fused_ordering(129) 00:15:41.428 fused_ordering(130) 00:15:41.428 fused_ordering(131) 00:15:41.428 fused_ordering(132) 00:15:41.428 fused_ordering(133) 00:15:41.428 fused_ordering(134) 00:15:41.428 fused_ordering(135) 00:15:41.428 fused_ordering(136) 00:15:41.428 fused_ordering(137) 00:15:41.428 fused_ordering(138) 00:15:41.428 fused_ordering(139) 00:15:41.429 fused_ordering(140) 00:15:41.429 fused_ordering(141) 00:15:41.429 fused_ordering(142) 00:15:41.429 fused_ordering(143) 00:15:41.429 fused_ordering(144) 00:15:41.429 fused_ordering(145) 00:15:41.429 fused_ordering(146) 00:15:41.429 fused_ordering(147) 00:15:41.429 fused_ordering(148) 00:15:41.429 fused_ordering(149) 00:15:41.429 fused_ordering(150) 00:15:41.429 fused_ordering(151) 00:15:41.429 fused_ordering(152) 00:15:41.429 
fused_ordering(153) [... fused_ordering(154) through fused_ordering(1012) omitted: one entry per iteration, identical apart from the counter, with the log timestamp advancing from 00:15:41.429 to 00:15:44.808 ...]
00:15:44.808 fused_ordering(1013) 00:15:44.808 fused_ordering(1014) 00:15:44.808 fused_ordering(1015) 00:15:44.808 fused_ordering(1016) 00:15:44.808 fused_ordering(1017) 00:15:44.808 fused_ordering(1018) 00:15:44.808 fused_ordering(1019) 00:15:44.808 fused_ordering(1020) 00:15:44.808 fused_ordering(1021) 00:15:44.808 fused_ordering(1022) 00:15:44.808 fused_ordering(1023) 00:15:44.808 21:59:03 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:44.808 21:59:03 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:44.808 21:59:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:44.808 21:59:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:15:44.808 21:59:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:44.808 21:59:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:15:44.808 21:59:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:44.808 21:59:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:44.808 rmmod nvme_tcp 00:15:44.808 rmmod nvme_fabrics 00:15:44.808 rmmod nvme_keyring 00:15:44.808 21:59:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:44.808 21:59:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:15:44.808 21:59:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:15:44.808 21:59:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 4031123 ']' 00:15:44.808 21:59:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 4031123 00:15:44.808 21:59:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 4031123 ']' 00:15:44.808 21:59:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 4031123 00:15:44.808 21:59:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:15:44.808 21:59:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:44.808 21:59:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4031123 00:15:44.808 21:59:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:44.808 21:59:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:44.808 21:59:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4031123' 00:15:44.808 killing process with pid 4031123 00:15:44.808 21:59:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 4031123 00:15:44.808 21:59:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 4031123 00:15:46.183 21:59:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:46.183 21:59:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:46.183 21:59:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:46.183 21:59:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:46.183 21:59:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:46.183 21:59:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.183 21:59:05 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.183 21:59:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.083 21:59:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:48.083 00:15:48.083 real 0m10.993s 00:15:48.083 user 0m9.785s 00:15:48.083 sys 0m3.918s 00:15:48.083 21:59:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:48.083 21:59:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:48.083 ************************************ 00:15:48.083 END TEST nvmf_fused_ordering 00:15:48.083 ************************************ 00:15:48.083 21:59:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:48.083 21:59:07 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:48.083 21:59:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:48.083 21:59:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:48.083 21:59:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:48.083 ************************************ 00:15:48.083 START TEST nvmf_delete_subsystem 00:15:48.083 ************************************ 00:15:48.083 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:48.341 * Looking for test storage... 00:15:48.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:48.341 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:48.341 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:15:48.341 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.341 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.342 21:59:07 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... already-present toolchain entries repeated, duplicates omitted ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same entries with /opt/go prepended, duplicates omitted ...]:/var/lib/snapd/snap/bin 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same entries with /opt/protoc prepended, duplicates omitted ...]:/var/lib/snapd/snap/bin 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo [... the exported PATH, as above ...] 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 21:59:07
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:15:48.342 21:59:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:50.244 21:59:09 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:50.244 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:50.244 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:50.244 21:59:09 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:50.244 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:50.244 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:50.244 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:50.245 21:59:09 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:50.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:50.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:15:50.245 00:15:50.245 --- 10.0.0.2 ping statistics --- 00:15:50.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.245 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:50.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:50.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:15:50.245 00:15:50.245 --- 10.0.0.1 ping statistics --- 00:15:50.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.245 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=4033745 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 4033745 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 4033745 ']' 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:50.245 21:59:09 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:50.503 [2024-07-13 21:59:09.691594] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:50.503 [2024-07-13 21:59:09.691743] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.503 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.503 [2024-07-13 21:59:09.825340] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:50.761 [2024-07-13 21:59:10.068040] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
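The nvmfappstart/waitforlisten sequence above amounts to: launch nvmf_tgt inside the target namespace, record its pid, and poll the RPC socket until the application answers. A minimal sketch of that pattern, assuming the spdk checkout as the working directory and the default /var/tmp/spdk.sock RPC socket (the real waitforlisten helper in autotest_common.sh adds a retry limit and liveness checks on the pid):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # rpc_get_methods is a cheap query; it only succeeds once the RPC server is listening
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done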
00:15:50.762 [2024-07-13 21:59:10.068117] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.762 [2024-07-13 21:59:10.068152] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.762 [2024-07-13 21:59:10.068173] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.762 [2024-07-13 21:59:10.068194] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:50.762 [2024-07-13 21:59:10.068306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.762 [2024-07-13 21:59:10.068314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.328 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:51.328 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:15:51.328 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:51.328 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:51.328 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:51.328 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.328 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:51.328 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.328 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:51.328 [2024-07-13 21:59:10.672685] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.328 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.328 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:51.328 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.328 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:51.328 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.328 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:51.328 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.329 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:51.329 [2024-07-13 21:59:10.690240] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:51.329 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.329 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:51.329 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.329 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:51.329 NULL1 00:15:51.329 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:15:51.329 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:51.329 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.329 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:51.329 Delay0 00:15:51.329 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.329 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:51.329 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.329 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:51.329 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.329 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4033897 00:15:51.329 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:51.329 21:59:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:51.594 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.594 [2024-07-13 21:59:10.824636] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
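Stripped of the xtrace noise, the fixture built above is: a 1000 MB null bdev wrapped in a delay bdev that injects roughly one second of latency on every I/O path (the -r/-t/-w/-n values are in microseconds), exported as a namespace of cnode1, with spdk_nvme_perf driving queue depth 128 against it so plenty of I/O is still in flight when the subsystem is deleted two seconds in. A condensed sketch of the same sequence, assuming rpc.py and the binaries are invoked from the spdk checkout against the default RPC socket:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # 5s random 70/30 read/write run at queue depth 128; delete the subsystem while it is still running
    ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    sleep 2
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1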
00:15:53.529 21:59:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:53.529 21:59:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.529 21:59:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:53.788 [... a burst of "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" completions, interleaved with "starting I/O failed: -6", one group per aborted batch, omitted ...] 00:15:53.788 [2024-07-13 21:59:13.019131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020380 is same with the state(5) to be set
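Each of those completion entries is the expected outcome of the test rather than a failure: with close to a second of injected latency per command and queue depth 128, deleting cnode1 catches a large batch of commands in flight, and the initiator sees them complete with status type 0 (generic command status) and status code 0x8, which the NVMe base specification defines as Command Aborted due to SQ Deletion; the interleaved "starting I/O failed: -6" lines are new submissions failing with -ENXIO once the queue pair is already being torn down. A throwaway bash helper for reading the (sct, sc) tuples in output like this; the mapping covers only the generic codes that tend to show up here:

    # decode the (sct, sc) pair printed by the perf tool; generic status type only
    decode_status() {
        local sct=$1 sc=$2
        if [ "$sct" -ne 0 ]; then
            echo "status type $sct: see the NVMe base spec status tables"
            return
        fi
        case "$sc" in
            0) echo "SUCCESSFUL COMPLETION" ;;
            7) echo "COMMAND ABORT REQUESTED" ;;
            8) echo "COMMAND ABORTED DUE TO SQ DELETION" ;;
            *) echo "generic status code $sc: see the NVMe base spec" ;;
        esac
    }
    decode_status 0 8   # prints: COMMAND ABORTED DUE TO SQ DELETION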
00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Write completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Write completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Write completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Write completed with error (sct=0, sc=8) 00:15:53.788 Write completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Write completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Write completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Write completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Write completed with error (sct=0, sc=8) 00:15:53.788 Write completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Write completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 starting I/O failed: -6 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 starting I/O failed: -6 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Write completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Write completed with error (sct=0, sc=8) 00:15:53.788 Write completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 starting I/O failed: -6 00:15:53.788 Write completed with error (sct=0, sc=8) 00:15:53.788 Write completed with 
error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 starting I/O failed: -6 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Write completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 starting I/O failed: -6 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 starting I/O failed: -6 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 starting I/O failed: -6 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Write completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 starting I/O failed: -6 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 starting I/O failed: -6 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 Read completed with error (sct=0, sc=8) 00:15:53.788 starting I/O failed: -6 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Write completed with error (sct=0, sc=8) 00:15:53.789 starting I/O failed: -6 00:15:53.789 Write completed with error (sct=0, sc=8) 00:15:53.789 Write completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 starting I/O failed: -6 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 [2024-07-13 21:59:13.020788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016100 is same with the state(5) to be set 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Write completed with error (sct=0, sc=8) 00:15:53.789 Write completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Write completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Write completed with error (sct=0, sc=8) 00:15:53.789 Write completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, 
sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Write completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Write completed with error (sct=0, sc=8) 00:15:53.789 Write completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Write completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Write completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Write completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Write completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Write completed with error (sct=0, sc=8) 00:15:53.789 Read completed with error (sct=0, sc=8) 00:15:53.789 Write completed with error (sct=0, sc=8) 00:15:54.736 [2024-07-13 21:59:13.965905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015980 is same with the state(5) to be set 00:15:54.736 Read completed with error (sct=0, sc=8) 00:15:54.736 Read completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 
[2024-07-13 21:59:14.020933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020600 is same with the state(5) to be set 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 [2024-07-13 21:59:14.021721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000020100 is same with the state(5) to be set 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 [2024-07-13 21:59:14.023260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000015e80 is same with the state(5) to be set 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 
00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.737 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:15:54.737 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4033897 00:15:54.737 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 Read completed with error (sct=0, sc=8) 00:15:54.737 Write completed with error (sct=0, sc=8) 00:15:54.737 [2024-07-13 21:59:14.027178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000016380 is same with the state(5) to be set 00:15:54.737 Initializing NVMe Controllers 00:15:54.737 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:54.737 Controller IO queue size 128, less than required. 00:15:54.737 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:54.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:54.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:54.737 Initialization complete. Launching workers. 
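The @34-@36 entries above, together with the @38 check that appears after the perf summary below, are one pass of the script's wait-for-exit loop: kill -0 only probes whether the perf process still exists. A sketch of the loop those line numbers imply (shape inferred from the trace, not copied from delete_subsystem.sh):

delay=0
# kill -0 sends no signal; it succeeds while $perf_pid is still alive.
while kill -0 "$perf_pid" 2> /dev/null; do
    sleep 0.5
    if (( delay++ > 30 )); then   # roughly 15 s of grace, then fail the test
        echo "perf did not exit after nvmf_delete_subsystem" >&2
        return 1
    fi
done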
00:15:54.737 Initializing NVMe Controllers
00:15:54.737 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:15:54.737 Controller IO queue size 128, less than required.
00:15:54.737 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:54.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:15:54.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:15:54.737 Initialization complete. Launching workers.
00:15:54.737 ========================================================
00:15:54.737 Latency(us)
00:15:54.737 Device Information : IOPS MiB/s Average min max
00:15:54.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.14 0.09 884104.87 815.22 1016940.78
00:15:54.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 169.21 0.08 896237.79 944.93 1015241.82
00:15:54.737 ========================================================
00:15:54.737 Total : 345.35 0.17 890049.65 815.22 1016940.78
00:15:54.737
00:15:54.737 [2024-07-13 21:59:14.028803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000015980 (9): Bad file descriptor
00:15:54.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:15:55.303 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:15:55.303 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4033897
00:15:55.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4033897) - No such process
00:15:55.303 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4033897
00:15:55.303 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:15:55.303 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 4033897
00:15:55.303 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
00:15:55.303 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:15:55.303 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
00:15:55.303 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:15:55.303 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 4033897
00:15:55.303 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
00:15:55.303 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:15:55.303 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:15:55.303 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:15:55.303 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:15:55.303 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:55.303 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:15:55.303 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:15:55.303 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:55.303 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:15:55.303 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:15:55.303 [2024-07-13 21:59:14.546450] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:55.303 21:59:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
21:59:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
21:59:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
21:59:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
21:59:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
21:59:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4034425
21:59:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
21:59:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
21:59:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4034425
21:59:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:15:55.303 EAL: No free 2048 kB hugepages reported on node 1
00:15:55.303 [2024-07-13 21:59:14.668511] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:15:55.868 - 00:15:58.386 [elided: six passes of the wait loop, each tracing target/delete_subsystem.sh@60 -- # (( delay++ > 20 )); @57 -- # kill -0 4034425; @58 -- # sleep 0.5]
00:15:58.645 Initializing NVMe Controllers
00:15:58.645 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:15:58.645 Controller IO queue size 128, less than required.
00:15:58.645 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:15:58.645 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:15:58.645 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:15:58.645 Initialization complete. Launching workers.
00:15:58.645 ========================================================
00:15:58.645 Latency(us)
00:15:58.645 Device Information : IOPS MiB/s Average min max
00:15:58.645 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005946.49 1000346.47 1044294.93
00:15:58.645 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005358.46 1000307.39 1013843.55
00:15:58.645 ========================================================
00:15:58.645 Total : 256.00 0.12 1005652.47 1000307.39 1044294.93
00:15:58.645
00:15:58.903 21:59:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
21:59:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4034425
00:15:58.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4034425) - No such process
21:59:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4034425
21:59:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
21:59:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
21:59:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
21:59:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
21:59:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
21:59:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
21:59:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
21:59:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
21:59:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
21:59:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
21:59:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
21:59:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 4033745 ']'
21:59:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 4033745
21:59:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 4033745 ']'
21:59:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 4033745
21:59:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname
21:59:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
21:59:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4033745
21:59:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0
21:59:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
21:59:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4033745'
00:15:58.903 killing process with pid 4033745
21:59:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 4033745
21:59:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 4033745
00:16:00.279 21:59:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:16:00.279 21:59:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:16:00.279 21:59:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:16:00.279 21:59:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:00.279 21:59:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns
00:16:00.279 21:59:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:00.279 21:59:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:00.279 21:59:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:02.180 21:59:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:16:02.180
00:16:02.180 real 0m14.016s
00:16:02.180 user 0m30.719s
00:16:02.180 sys 0m3.153s
00:16:02.180 21:59:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable
00:16:02.180 21:59:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:16:02.180 ************************************
00:16:02.180 END TEST nvmf_delete_subsystem
00:16:02.180 ************************************
00:16:02.180 21:59:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:16:02.180 21:59:21 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:16:02.180 21:59:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:16:02.180 21:59:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:16:02.180 21:59:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:16:02.180 ************************************
00:16:02.180 START TEST nvmf_ns_masking
00:16:02.180 ************************************
00:16:02.180 21:59:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:16:02.180 * Looking for test storage...
00:16:02.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:02.180 21:59:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:02.180 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:02.180 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:02.180 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.180 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.180 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.180 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.180 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.180 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.180 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.180 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.180 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.180 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:02.180 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:02.181 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.181 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.181 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:02.181 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:02.181 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:02.181 21:59:21 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.181 21:59:21 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.181 21:59:21 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.181 21:59:21 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.181 21:59:21 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.181 21:59:21 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.181 21:59:21 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:02.181 21:59:21 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.181 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:16:02.181 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:02.181 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:02.181 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:02.181 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:02.181 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.181 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:02.181 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:02.438 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:02.439 21:59:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:02.439 21:59:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:02.439 21:59:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:02.439 21:59:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:02.439 21:59:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=1d4bde9c-7868-4a3d-b9ca-f9c2bbde1a1a 00:16:02.439 21:59:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:02.439 21:59:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c665cad9-80f9-4fef-b454-10ac71335e3f 00:16:02.439 21:59:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:02.439 21:59:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:02.439 21:59:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:02.439 21:59:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:02.439 21:59:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=96c959c6-cd70-4afb-be76-b233dfdfb97a 00:16:02.439 21:59:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:02.439 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:02.439 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:02.439 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:02.439 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:02.439 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:02.439 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.439 21:59:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.439 21:59:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.439 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:02.439 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:02.439 21:59:21 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:16:02.439 21:59:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:04.341 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:04.341 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:04.341 
21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:04.341 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:04.341 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:04.342 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:16:04.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:04.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms
00:16:04.342
00:16:04.342 --- 10.0.0.2 ping statistics ---
00:16:04.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:04.342 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms
00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:04.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:04.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms
00:16:04.342
00:16:04.342 --- 10.0.0.1 ping statistics ---
00:16:04.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:04.342 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms
00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0
00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable
00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=4036901
00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 4036901
00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 4036901 ']'
00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100
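Condensed, the plumbing just traced gives target and initiator separate network stacks on a single host: the first E810 port (cvl_0_0, 10.0.0.2) is moved into a private namespace for the SPDK target, while the second port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator side. The commands, collected verbatim from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into its own netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator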
00:16:04.342 21:59:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:04.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
21:59:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable
21:59:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:16:04.342 [2024-07-13 21:59:23.675308] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:16:04.342 [2024-07-13 21:59:23.675453] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:04.600 EAL: No free 2048 kB hugepages reported on node 1
00:16:04.600 [2024-07-13 21:59:23.814054] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:04.859 [2024-07-13 21:59:24.067827] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:04.859 [2024-07-13 21:59:24.067933] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:04.859 [2024-07-13 21:59:24.067962] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:04.859 [2024-07-13 21:59:24.067987] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:04.859 [2024-07-13 21:59:24.068009] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:04.859 [2024-07-13 21:59:24.068062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:05.425 21:59:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:05.425 21:59:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0
00:16:05.425 21:59:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:16:05.425 21:59:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable
00:16:05.425 21:59:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:16:05.425 21:59:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:05.425 21:59:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:16:05.682 [2024-07-13 21:59:24.903467] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:05.682 21:59:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64
00:16:05.682 21:59:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512
00:16:05.682 21:59:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:16:05.940 Malloc1
00:16:05.940 21:59:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:16:06.198 Malloc2
00:16:06.198 21:59:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
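The masking scaffold assembled here, finished by the add_ns/add_listener/connect entries just below, can be summarized as follows (commands and values taken verbatim from the trace; rpc.py again stands for spdk/scripts/rpc.py):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc1          # 64 MiB bdev, 512 B blocks
rpc.py bdev_malloc_create 64 512 -b Malloc2
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Host side: connect with an explicit host NQN and host ID so namespace
# visibility can later be granted or revoked per host.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I 96c959c6-cd70-4afb-be76-b233dfdfb97a -a 10.0.0.2 -s 4420 -i 4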
00:16:06.763 21:59:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
00:16:07.028 21:59:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:07.028 [2024-07-13 21:59:26.393706] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:07.028 21:59:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect
00:16:07.028 21:59:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 96c959c6-cd70-4afb-be76-b233dfdfb97a -a 10.0.0.2 -s 4420 -i 4
00:16:07.326 21:59:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME
21:59:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0
21:59:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
21:59:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
21:59:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2
00:16:09.227 21:59:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
21:59:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
21:59:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
21:59:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1
21:59:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
21:59:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0
21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1
21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:16:09.227 [ 0]:0x1
21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:16:09.485 21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=23a7425436b64d1d8f2e6d0809059d66
21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 23a7425436b64d1d8f2e6d0809059d66 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
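The visibility probe behind the "[ 0]:0x1" output reduces to two nvme-cli calls. A sketch of what the @43-@45 entries suggest ns_is_visible does (body inferred from the trace, not quoted from ns_masking.sh):

ns_is_visible() {
    # The namespace shows up in the controller's active namespace list...
    nvme list-ns /dev/nvme0 | grep "$1"
    # ...and reports a non-zero NGUID; a masked or undefined namespace
    # identifies as all zeroes instead.
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}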
00:16:09.743 21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:09.743 21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:09.743 21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:09.743 [ 0]:0x1 00:16:09.743 21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:09.743 21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:09.743 21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=23a7425436b64d1d8f2e6d0809059d66 00:16:09.743 21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 23a7425436b64d1d8f2e6d0809059d66 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:09.743 21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:09.743 21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:09.743 21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:09.743 [ 1]:0x2 00:16:09.743 21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:09.743 21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:09.743 21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f9cd0976fa064684933c1e6adb8f8ca2 00:16:09.743 21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f9cd0976fa064684933c1e6adb8f8ca2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:09.743 21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:09.743 21:59:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:09.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.743 21:59:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:10.001 21:59:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:10.259 21:59:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:10.259 21:59:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 96c959c6-cd70-4afb-be76-b233dfdfb97a -a 10.0.0.2 -s 4420 -i 4 00:16:10.517 21:59:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:10.517 21:59:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:10.517 21:59:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:10.517 21:59:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:16:10.517 21:59:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:16:10.517 21:59:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:13.048 21:59:31 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:13.048 [ 0]:0x2 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f9cd0976fa064684933c1e6adb8f8ca2 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
f9cd0976fa064684933c1e6adb8f8ca2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:13.048 21:59:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:13.048 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:13.048 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:13.048 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:13.048 [ 0]:0x1 00:16:13.048 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:13.048 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:13.048 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=23a7425436b64d1d8f2e6d0809059d66 00:16:13.048 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 23a7425436b64d1d8f2e6d0809059d66 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:13.048 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:13.048 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:13.048 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:13.048 [ 1]:0x2 00:16:13.048 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:13.048 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:13.048 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f9cd0976fa064684933c1e6adb8f8ca2 00:16:13.048 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f9cd0976fa064684933c1e6adb8f8ca2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:13.048 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:13.307 [ 0]:0x2 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f9cd0976fa064684933c1e6adb8f8ca2 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f9cd0976fa064684933c1e6adb8f8ca2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:13.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.307 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:13.873 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:13.873 21:59:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 96c959c6-cd70-4afb-be76-b233dfdfb97a -a 10.0.0.2 -s 4420 -i 4 00:16:13.873 21:59:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:13.873 21:59:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:16:13.873 21:59:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:13.873 21:59:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:13.873 21:59:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:13.873 21:59:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
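The masking primitive exercised above, condensed: a namespace added with --no-auto-visible stays hidden from all hosts (the host sees an all-zero NGUID) until its NQN is whitelisted, and nvmf_ns_remove_host hides it again. A sketch using the NQNs from the trace, with the rpc.py path shortened:

rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1      # NSID 1 becomes visible to host1
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # NSID 1 masked again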
00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:16.398 [ 0]:0x1 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=23a7425436b64d1d8f2e6d0809059d66 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 23a7425436b64d1d8f2e6d0809059d66 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:16.398 [ 1]:0x2 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f9cd0976fa064684933c1e6adb8f8ca2 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f9cd0976fa064684933c1e6adb8f8ca2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:16.398 [ 0]:0x2 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:16.398 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f9cd0976fa064684933c1e6adb8f8ca2 00:16:16.399 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f9cd0976fa064684933c1e6adb8f8ca2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:16.399 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:16.399 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:16.399 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:16.399 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:16.399 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:16.399 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:16.399 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:16.399 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:16.399 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:16.399 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:16.399 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:16.399 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:16.657 [2024-07-13 21:59:35.928962] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:16.657 request: 00:16:16.657 { 00:16:16.657 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:16.657 "nsid": 2, 00:16:16.657 "host": "nqn.2016-06.io.spdk:host1", 00:16:16.657 "method": "nvmf_ns_remove_host", 00:16:16.657 "req_id": 1 00:16:16.657 } 00:16:16.657 Got JSON-RPC error response 00:16:16.657 response: 00:16:16.657 { 00:16:16.657 "code": -32602, 00:16:16.657 "message": "Invalid parameters" 00:16:16.657 } 00:16:16.657 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:16.657 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:16.657 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:16.657 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:16.657 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:16.657 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:16.657 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:16.657 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:16.657 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:16.657 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:16.657 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:16.657 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:16.657 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:16.657 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:16.657 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:16.657 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:16.657 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:16.657 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:16.657 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:16.657 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:16.657 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:16.657 21:59:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:16.657 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:16.657 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:16.657 21:59:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:16.657 [ 0]:0x2 00:16:16.657 21:59:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:16.657 21:59:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:16.914 21:59:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f9cd0976fa064684933c1e6adb8f8ca2 00:16:16.914 21:59:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
f9cd0976fa064684933c1e6adb8f8ca2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:16.914 21:59:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:16.914 21:59:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:16.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.914 21:59:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=4038522 00:16:16.914 21:59:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:16.914 21:59:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:16.914 21:59:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 4038522 /var/tmp/host.sock 00:16:16.914 21:59:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 4038522 ']' 00:16:16.915 21:59:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:16:16.915 21:59:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:16.915 21:59:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:16.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:16.915 21:59:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:16.915 21:59:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:17.172 [2024-07-13 21:59:36.316010] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:16:17.172 [2024-07-13 21:59:36.316150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4038522 ] 00:16:17.172 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.172 [2024-07-13 21:59:36.448172] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.430 [2024-07-13 21:59:36.706284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.363 21:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:18.363 21:59:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:16:18.363 21:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:18.621 21:59:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:18.878 21:59:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 1d4bde9c-7868-4a3d-b9ca-f9c2bbde1a1a 00:16:18.878 21:59:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:18.878 21:59:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 1D4BDE9C78684A3DB9CAF9C2BBDE1A1A -i 00:16:19.135 21:59:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid c665cad9-80f9-4fef-b454-10ac71335e3f 00:16:19.135 21:59:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:19.135 21:59:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C665CAD980F94FEFB45410AC71335E3F -i 00:16:19.392 21:59:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:19.650 21:59:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:19.908 21:59:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:19.908 21:59:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:20.166 nvme0n1 00:16:20.166 21:59:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:20.166 21:59:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:16:20.731 nvme1n2 00:16:20.731 21:59:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:20.731 21:59:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:20.731 21:59:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:20.731 21:59:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:20.731 21:59:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:20.989 21:59:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:20.989 21:59:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:20.989 21:59:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:20.989 21:59:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:21.247 21:59:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 1d4bde9c-7868-4a3d-b9ca-f9c2bbde1a1a == \1\d\4\b\d\e\9\c\-\7\8\6\8\-\4\a\3\d\-\b\9\c\a\-\f\9\c\2\b\b\d\e\1\a\1\a ]] 00:16:21.247 21:59:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:21.247 21:59:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:21.247 21:59:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:21.504 21:59:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ c665cad9-80f9-4fef-b454-10ac71335e3f == \c\6\6\5\c\a\d\9\-\8\0\f\9\-\4\f\e\f\-\b\4\5\4\-\1\0\a\c\7\1\3\3\5\e\3\f ]] 00:16:21.504 21:59:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 4038522 00:16:21.504 21:59:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 4038522 ']' 00:16:21.504 21:59:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 4038522 00:16:21.504 21:59:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:16:21.504 21:59:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:21.504 21:59:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4038522 00:16:21.504 21:59:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:21.504 21:59:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:21.504 21:59:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4038522' 00:16:21.504 killing process with pid 4038522 00:16:21.504 21:59:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 4038522 00:16:21.504 21:59:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 4038522 00:16:24.078 21:59:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.078 21:59:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:16:24.078 21:59:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:16:24.078 21:59:43 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:24.078 21:59:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:16:24.078 21:59:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:24.078 21:59:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:16:24.078 21:59:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:24.078 21:59:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:24.078 rmmod nvme_tcp 00:16:24.078 rmmod nvme_fabrics 00:16:24.078 rmmod nvme_keyring 00:16:24.078 21:59:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:24.336 21:59:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:16:24.336 21:59:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:16:24.336 21:59:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 4036901 ']' 00:16:24.336 21:59:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 4036901 00:16:24.336 21:59:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 4036901 ']' 00:16:24.336 21:59:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 4036901 00:16:24.336 21:59:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:16:24.336 21:59:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:24.336 21:59:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4036901 00:16:24.336 21:59:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:24.336 21:59:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:24.336 21:59:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4036901' 00:16:24.336 killing process with pid 4036901 00:16:24.336 21:59:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 4036901 00:16:24.336 21:59:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 4036901 00:16:26.237 21:59:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:26.237 21:59:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:26.237 21:59:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:26.237 21:59:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:26.237 21:59:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:26.237 21:59:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.237 21:59:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.237 21:59:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.143 21:59:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:28.143 00:16:28.143 real 0m25.763s 00:16:28.143 user 0m34.982s 00:16:28.143 sys 0m4.328s 00:16:28.143 21:59:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:28.143 21:59:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:28.143 ************************************ 00:16:28.143 END TEST nvmf_ns_masking 00:16:28.143 ************************************ 00:16:28.143 21:59:47 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:16:28.143 21:59:47 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:16:28.143 21:59:47 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:28.143 21:59:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:28.143 21:59:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:28.143 21:59:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:28.143 ************************************ 00:16:28.143 START TEST nvmf_nvme_cli 00:16:28.143 ************************************ 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:28.143 * Looking for test storage... 00:16:28.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:16:28.143 21:59:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:30.046 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:30.046 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:30.046 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:30.046 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.046 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.047 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:30.047 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:30.047 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:30.047 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:30.047 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:30.047 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:30.047 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.047 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:30.047 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:30.047 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:30.047 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:30.047 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:30.047 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:30.047 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:30.047 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:30.047 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:30.047 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:30.047 21:59:49 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:30.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:16:30.305 00:16:30.305 --- 10.0.0.2 ping statistics --- 00:16:30.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.305 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:16:30.305 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:30.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:30.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:16:30.305 00:16:30.305 --- 10.0.0.1 ping statistics --- 00:16:30.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.305 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:16:30.305 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.305 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:16:30.305 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:30.305 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.305 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:30.305 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:30.305 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.305 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:30.305 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:30.305 21:59:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:30.305 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:30.305 21:59:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:30.305 21:59:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:30.305 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=4041425 00:16:30.305 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:30.305 21:59:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 4041425 00:16:30.305 21:59:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 4041425 ']' 00:16:30.305 21:59:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.305 21:59:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:30.305 21:59:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.305 21:59:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:30.305 21:59:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:30.305 [2024-07-13 21:59:49.571268] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:16:30.305 [2024-07-13 21:59:49.571432] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.305 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.563 [2024-07-13 21:59:49.722567] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:30.820 [2024-07-13 21:59:49.992430] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.820 [2024-07-13 21:59:49.992506] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.820 [2024-07-13 21:59:49.992534] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.820 [2024-07-13 21:59:49.992555] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.820 [2024-07-13 21:59:49.992576] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:30.820 [2024-07-13 21:59:49.992705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.820 [2024-07-13 21:59:49.992761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:30.820 [2024-07-13 21:59:49.992807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.820 [2024-07-13 21:59:49.992818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:31.385 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:31.385 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:16:31.385 21:59:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:31.385 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:31.385 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:31.385 21:59:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.385 21:59:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:31.385 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.385 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:31.385 [2024-07-13 21:59:50.508635] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:31.385 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.385 21:59:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:31.385 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.385 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:31.385 Malloc0 00:16:31.385 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.385 21:59:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:31.385 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.385 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:31.385 Malloc1 00:16:31.385 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.385 21:59:50 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:31.385 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.385 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:31.385 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.385 21:59:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:31.385 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.385 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:31.386 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.386 21:59:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:31.386 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.386 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:31.386 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.386 21:59:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:31.386 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.386 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:31.386 [2024-07-13 21:59:50.699509] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.386 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.386 21:59:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:31.386 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.386 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:31.386 21:59:50 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.386 21:59:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:16:31.643 00:16:31.643 Discovery Log Number of Records 2, Generation counter 2 00:16:31.643 =====Discovery Log Entry 0====== 00:16:31.643 trtype: tcp 00:16:31.643 adrfam: ipv4 00:16:31.643 subtype: current discovery subsystem 00:16:31.643 treq: not required 00:16:31.643 portid: 0 00:16:31.643 trsvcid: 4420 00:16:31.643 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:31.643 traddr: 10.0.0.2 00:16:31.643 eflags: explicit discovery connections, duplicate discovery information 00:16:31.643 sectype: none 00:16:31.643 =====Discovery Log Entry 1====== 00:16:31.643 trtype: tcp 00:16:31.643 adrfam: ipv4 00:16:31.643 subtype: nvme subsystem 00:16:31.643 treq: not required 00:16:31.643 portid: 0 00:16:31.643 trsvcid: 4420 00:16:31.643 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:31.643 traddr: 10.0.0.2 00:16:31.643 eflags: none 00:16:31.643 sectype: none 00:16:31.643 21:59:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:31.643 21:59:50 nvmf_tcp.nvmf_nvme_cli -- 
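
rpc_cmd in this trace is SPDK's test-harness wrapper around scripts/rpc.py, talking to the target's default /var/tmp/spdk.sock socket, so the provisioning just performed condenses to the sketch below; every command name and argument is copied verbatim from the trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # TCP transport, plus two 64 MiB malloc bdevs to back the namespaces
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC bdev_malloc_create 64 512 -b Malloc1

    # One subsystem carrying both namespaces, with a data listener and a
    # discovery listener on the namespaced target address
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The two discovery log entries printed above (the discovery subsystem itself plus cnode1) are exactly what this listener pair produces.
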
target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:31.643 21:59:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:31.643 21:59:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:31.643 21:59:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:31.643 21:59:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:31.643 21:59:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:31.643 21:59:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:31.643 21:59:50 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:31.644 21:59:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:31.644 21:59:50 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:32.210 21:59:51 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:32.210 21:59:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:16:32.210 21:59:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:32.210 21:59:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:32.210 21:59:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:32.210 21:59:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:16:34.105 21:59:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:34.105 21:59:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:34.105 21:59:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:34.105 21:59:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:34.105 21:59:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:34.105 21:59:53 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:16:34.105 21:59:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:34.105 21:59:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:34.105 21:59:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:34.105 21:59:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:34.361 21:59:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:34.361 21:59:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:34.361 21:59:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:34.361 21:59:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:34.361 21:59:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:34.361 21:59:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:34.361 21:59:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:34.361 21:59:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:34.361 21:59:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:34.361 21:59:53 
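
On the initiator side the test then connects the kernel NVMe/TCP host stack to cnode1 and polls until both namespaces surface as block devices carrying the subsystem serial. Condensed from the discover, connect, and waitforserial steps traced above (host NQN/ID copied from the trace; the retry bounds mirror the i++ <= 15 / sleep 2 loop):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55

    nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

    # waitforserial: expect 2 devices (Malloc0 + Malloc1) within ~15 tries
    i=0
    while (( i++ <= 15 )); do
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 2 )) && break
    done

In this run the first poll already saw both /dev/nvme0n1 and /dev/nvme0n2, which is what the nvme list parsing that follows enumerates.
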
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:34.361 21:59:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:34.361 /dev/nvme0n1 ]] 00:16:34.361 21:59:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:34.361 21:59:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:34.361 21:59:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:34.361 21:59:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:34.361 21:59:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:34.361 21:59:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:34.361 21:59:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:34.361 21:59:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:34.361 21:59:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:34.361 21:59:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:34.361 21:59:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:34.361 21:59:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:34.361 21:59:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:34.361 21:59:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:34.362 21:59:53 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:34.362 21:59:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:34.362 21:59:53 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:34.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.925 21:59:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:34.925 21:59:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:16:34.925 21:59:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:34.925 21:59:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:34.925 21:59:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:34.925 21:59:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:34.925 21:59:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:16:34.925 21:59:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- 
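
Teardown is the mirror image: disconnect the kernel initiator, delete the subsystem over RPC, and let nvmftestfini unload the host modules and kill the target. The namespace removal itself (_remove_spdk_ns) runs with xtrace suppressed in the trace below, so the netns delete line in this sketch is an assumption about what it does rather than a transcript:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # nvmfcleanup: the modules only unload once every controller is gone
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    kill "$nvmfpid"                  # 4041425 in this run
    ip netns delete cvl_0_0_ns_spdk  # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1
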
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:34.926 rmmod nvme_tcp 00:16:34.926 rmmod nvme_fabrics 00:16:34.926 rmmod nvme_keyring 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 4041425 ']' 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 4041425 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 4041425 ']' 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 4041425 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4041425 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4041425' 00:16:34.926 killing process with pid 4041425 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 4041425 00:16:34.926 21:59:54 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 4041425 00:16:36.827 21:59:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:36.827 21:59:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:36.827 21:59:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:36.827 21:59:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:36.827 21:59:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:36.827 21:59:55 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.827 21:59:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.827 21:59:55 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.731 21:59:57 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:38.731 00:16:38.731 real 0m10.487s 00:16:38.731 user 0m21.951s 00:16:38.731 sys 0m2.445s 00:16:38.731 21:59:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:38.731 21:59:57 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:38.731 ************************************ 00:16:38.731 END TEST nvmf_nvme_cli 00:16:38.731 ************************************ 00:16:38.731 21:59:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:38.731 21:59:57 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:16:38.731 21:59:57 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:38.731 21:59:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:38.731 21:59:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:38.731 21:59:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:38.731 ************************************ 00:16:38.731 START TEST nvmf_host_management 00:16:38.731 ************************************ 00:16:38.731 21:59:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:38.731 * Looking for test storage... 00:16:38.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:38.731 21:59:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:38.731 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:38.731 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.731 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.731 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.731 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.731 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.731 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.731 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.731 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.731 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.731 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.731 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.731 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.731 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.731 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.731 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:38.731 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.731 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:38.731 21:59:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.731 21:59:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.731 21:59:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.731 21:59:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:38.732 
21:59:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:38.732 21:59:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:40.634 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:40.634 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ 
up == up ]] 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:40.634 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:40.634 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:40.634 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:40.635 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:40.635 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:40.635 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:40.635 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.635 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:40.635 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:40.635 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:40.635 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:40.635 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:40.635 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:40.635 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:40.635 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.635 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:40.635 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:40.635 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:40.635 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:40.635 21:59:59 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:40.635 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:40.635 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:40.635 21:59:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:40.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:16:40.930 00:16:40.930 --- 10.0.0.2 ping statistics --- 00:16:40.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.930 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:40.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:40.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:16:40.930 00:16:40.930 --- 10.0.0.1 ping statistics --- 00:16:40.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.930 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=4044199 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 4044199 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:40.930 
22:00:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 4044199 ']' 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:40.930 22:00:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:40.931 [2024-07-13 22:00:00.171523] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:40.931 [2024-07-13 22:00:00.171658] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.931 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.188 [2024-07-13 22:00:00.304483] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:41.188 [2024-07-13 22:00:00.536078] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:41.188 [2024-07-13 22:00:00.536141] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:41.188 [2024-07-13 22:00:00.536169] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:41.188 [2024-07-13 22:00:00.536200] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:41.188 [2024-07-13 22:00:00.536219] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
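
For the host-management test, nvmfappstart launches the target inside the namespace and blocks until its RPC socket answers. waitforlisten's body is not expanded in this trace; a functionally equivalent poll, assuming scripts/rpc.py and the default /var/tmp/spdk.sock path, would be:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!

    # Retry until the app is up and serving RPCs; rpc_get_methods is a
    # cheap probe that succeeds as soon as the socket is listening
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
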
00:16:41.188 [2024-07-13 22:00:00.536347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.188 [2024-07-13 22:00:00.536395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:41.188 [2024-07-13 22:00:00.537898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:41.188 [2024-07-13 22:00:00.537901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.753 22:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:41.753 22:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:41.753 22:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:41.753 22:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:41.753 22:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:41.753 22:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.753 22:00:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:41.753 22:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.753 22:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:41.753 [2024-07-13 22:00:01.119522] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.753 22:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.753 22:00:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:41.753 22:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:41.753 22:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:41.753 22:00:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:41.753 22:00:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:41.753 22:00:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:41.753 22:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.753 22:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:42.012 Malloc0 00:16:42.012 [2024-07-13 22:00:01.232178] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.012 22:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.012 22:00:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:42.012 22:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:42.012 22:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:42.012 22:00:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=4044438 00:16:42.012 22:00:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4044438 /var/tmp/bdevperf.sock 00:16:42.012 22:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 4044438 ']' 00:16:42.012 22:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:16:42.012 22:00:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:42.012 22:00:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:42.012 22:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:42.012 22:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:42.012 22:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:42.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:42.012 22:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:42.012 22:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:42.012 22:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:42.012 22:00:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:42.012 22:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:42.012 { 00:16:42.012 "params": { 00:16:42.012 "name": "Nvme$subsystem", 00:16:42.012 "trtype": "$TEST_TRANSPORT", 00:16:42.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:42.012 "adrfam": "ipv4", 00:16:42.012 "trsvcid": "$NVMF_PORT", 00:16:42.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:42.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:42.012 "hdgst": ${hdgst:-false}, 00:16:42.012 "ddgst": ${ddgst:-false} 00:16:42.012 }, 00:16:42.012 "method": "bdev_nvme_attach_controller" 00:16:42.012 } 00:16:42.012 EOF 00:16:42.012 )") 00:16:42.012 22:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:42.012 22:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:42.012 22:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:42.012 22:00:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:42.012 "params": { 00:16:42.012 "name": "Nvme0", 00:16:42.012 "trtype": "tcp", 00:16:42.012 "traddr": "10.0.0.2", 00:16:42.012 "adrfam": "ipv4", 00:16:42.012 "trsvcid": "4420", 00:16:42.012 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:42.012 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:42.012 "hdgst": false, 00:16:42.012 "ddgst": false 00:16:42.012 }, 00:16:42.012 "method": "bdev_nvme_attach_controller" 00:16:42.012 }' 00:16:42.012 [2024-07-13 22:00:01.346497] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:42.012 [2024-07-13 22:00:01.346639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4044438 ] 00:16:42.269 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.269 [2024-07-13 22:00:01.479670] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.527 [2024-07-13 22:00:01.719102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.092 Running I/O for 10 seconds... 
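
gen_nvmf_target_json above renders the bdev_nvme_attach_controller stanza that bdevperf reads from /dev/fd/63. The trace prints only that stanza, not the envelope around it, so the wrapper in the sketch below is an assumption (SPDK's conventional subsystems/bdev/config layout); the params object matches the printf output verbatim:

    # bdevperf.json
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }

    # 64-deep, 64 KiB verify workload for 10 seconds, as on the traced command line
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json bdevperf.json -q 64 -o 65536 -w verify -t 10
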
00:16:43.092 22:00:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:43.092 22:00:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:43.092 22:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:43.092 22:00:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.092 22:00:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:43.092 22:00:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.092 22:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:43.092 22:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:43.092 22:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:43.092 22:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:43.092 22:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:43.092 22:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:43.092 22:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:43.092 22:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:43.092 22:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:43.092 22:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:43.092 22:00:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.092 22:00:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:43.092 22:00:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.092 22:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=3 00:16:43.092 22:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 3 -ge 100 ']' 00:16:43.092 22:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:43.353 22:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:43.353 22:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:43.354 22:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:43.354 22:00:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.354 22:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:43.354 22:00:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:43.354 22:00:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.354 22:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=387 00:16:43.354 22:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 387 -ge 100 ']' 00:16:43.354 22:00:02 nvmf_tcp.nvmf_host_management -- 
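
The waitforio gate traced here gives the workload up to ten 250 ms polls to accumulate 100 read operations on Nvme0n1 before the host-management checks proceed. Condensed from the trace (the first sample read 3 ops; one sleep later it read 387 and passed):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

    $RPC framework_wait_init

    ret=1
    for (( i = 10; i != 0; i-- )); do
        read_io_count=$($RPC bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
        [ "$read_io_count" -ge 100 ] && { ret=0; break; }
        sleep 0.25
    done
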
target/host_management.sh@59 -- # ret=0
00:16:43.354 22:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break
00:16:43.354 22:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:16:43.354 22:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:16:43.354 22:00:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:43.354 22:00:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:16:43.354 [2024-07-13 22:00:02.669070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003480 is same with the state(5) to be set
00:16:43.356 [previous message repeated 43 more times, 22:00:02.669154 through 22:00:02.670593]
00:16:43.356 [2024-07-13 22:00:02.670773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:43.356 [2024-07-13 22:00:02.670829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:43.356 [same ASYNC EVENT REQUEST/ABORTED - SQ DELETION pair repeated for qid:0 cid:1, cid:2 and cid:3, 22:00:02.670873 through 22:00:02.670988]
00:16:43.356 [2024-07-13 22:00:02.671008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:16:43.356 22:00:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:43.356 22:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:16:43.356 22:00:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:43.356 22:00:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:16:43.356 [2024-07-13 22:00:02.673934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:43.356 [2024-07-13 22:00:02.673972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:43.362 [same print_command/ABORTED - SQ DELETION pair repeated for the remaining 63 queued commands, READ cid:20-63 (lba 51712-57216) and WRITE cid:0-18 (lba 57344-59648), all len:128, 22:00:02.674011 through 22:00:02.676929]
00:16:43.362 [2024-07-13 22:00:02.677253] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2c80 was disconnected and freed. reset controller.
00:16:43.362 [2024-07-13 22:00:02.678538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:16:43.362 task offset: 51584 on job bdev=Nvme0n1 fails
00:16:43.362
00:16:43.362 Latency(us)
00:16:43.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:43.362 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:43.362 Job: Nvme0n1 ended in about 0.37 seconds with error
00:16:43.362 Verification LBA range: start 0x0 length 0x400
00:16:43.362 Nvme0n1 : 0.37 1090.17 68.14 173.13 0.00 49042.59 3810.80 41748.86
00:16:43.362 ===================================================================================================================
00:16:43.362 Total : 1090.17 68.14 173.13 0.00 49042.59 3810.80 41748.86
00:16:43.362 22:00:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:43.362 22:00:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:16:43.362 [2024-07-13 22:00:02.683937] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:43.362 [2024-07-13 22:00:02.683988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:16:43.362 [2024-07-13 22:00:02.732376] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
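The abort storm above is the behavior this phase of host_management.sh exists to exercise: nvmf_subsystem_remove_host revokes host0's access while bdevperf still has its full queue depth of 64 verify commands in flight, so the target tears down the TCP qpair, all 64 outstanding commands complete with ABORTED - SQ DELETION, and the initiator disconnects and resets the controller; once nvmf_subsystem_add_host restores access, the reset succeeds. Stripped of the harness, the cycle is two RPCs (a sketch assembled from the rpc_cmd traces above; the script's polling and bookkeeping around them is omitted):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# revoke access while I/O is in flight: queued commands are aborted with
# SQ DELETION and the initiator's controller disconnects
$rpc_py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# restore access: the automatic reset/reconnect then completes successfully
$rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0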
00:16:44.313 22:00:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4044438 00:16:44.313 22:00:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:44.313 22:00:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:44.313 22:00:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:44.313 22:00:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:44.313 22:00:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:44.313 22:00:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:44.313 22:00:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:44.313 { 00:16:44.313 "params": { 00:16:44.313 "name": "Nvme$subsystem", 00:16:44.313 "trtype": "$TEST_TRANSPORT", 00:16:44.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:44.313 "adrfam": "ipv4", 00:16:44.313 "trsvcid": "$NVMF_PORT", 00:16:44.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:44.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:44.313 "hdgst": ${hdgst:-false}, 00:16:44.313 "ddgst": ${ddgst:-false} 00:16:44.313 }, 00:16:44.313 "method": "bdev_nvme_attach_controller" 00:16:44.313 } 00:16:44.313 EOF 00:16:44.313 )") 00:16:44.313 22:00:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:44.313 22:00:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:44.313 22:00:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:44.313 22:00:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:44.313 "params": { 00:16:44.313 "name": "Nvme0", 00:16:44.313 "trtype": "tcp", 00:16:44.313 "traddr": "10.0.0.2", 00:16:44.313 "adrfam": "ipv4", 00:16:44.313 "trsvcid": "4420", 00:16:44.313 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:44.313 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:44.313 "hdgst": false, 00:16:44.313 "ddgst": false 00:16:44.313 }, 00:16:44.313 "method": "bdev_nvme_attach_controller" 00:16:44.313 }' 00:16:44.572 [2024-07-13 22:00:03.766930] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:44.572 [2024-07-13 22:00:03.767074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4044751 ] 00:16:44.572 EAL: No free 2048 kB hugepages reported on node 1 00:16:44.572 [2024-07-13 22:00:03.895357] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.832 [2024-07-13 22:00:04.136290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.402 Running I/O for 1 seconds... 
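The JSON that bdevperf reads from /dev/fd/62 above is produced by gen_nvmf_target_json, whose mechanics are visible in the nvmf/common.sh trace: one bdev_nvme_attach_controller fragment per requested subsystem is appended to the config array through a heredoc, and the fragments are comma-joined by setting IFS before the final printf. The pattern in isolation (a minimal sketch, not the full function; the enclosing target JSON wrapper and the jq pretty-printing step are left out):

# build one JSON fragment per subsystem id, then join the fragments with commas
config=()
for subsystem in 0 1; do
config+=("$(cat <<EOF
{ "name": "Nvme$subsystem" }
EOF
)")
done
IFS=,                           # "${config[*]}" joins on the first char of IFS
printf '%s\n' "${config[*]}"    # -> { "name": "Nvme0" },{ "name": "Nvme1" }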
00:16:46.337 00:16:46.338 Latency(us) 00:16:46.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.338 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:46.338 Verification LBA range: start 0x0 length 0x400 00:16:46.338 Nvme0n1 : 1.02 1250.57 78.16 0.00 0.00 50329.62 11553.75 43496.49 00:16:46.338 =================================================================================================================== 00:16:46.338 Total : 1250.57 78.16 0.00 0.00 50329.62 11553.75 43496.49 00:16:47.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 4044438 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:47.718 rmmod nvme_tcp 00:16:47.718 rmmod nvme_fabrics 00:16:47.718 rmmod nvme_keyring 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 4044199 ']' 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 4044199 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 4044199 ']' 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 4044199 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4044199 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4044199' 00:16:47.718 killing process with 
pid 4044199 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 4044199 00:16:47.718 22:00:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 4044199 00:16:49.092 [2024-07-13 22:00:08.125424] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:49.092 22:00:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:49.092 22:00:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:49.093 22:00:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:49.093 22:00:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:49.093 22:00:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:49.093 22:00:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.093 22:00:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:49.093 22:00:08 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.995 22:00:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:50.995 22:00:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:50.995 00:16:50.995 real 0m12.393s 00:16:50.995 user 0m34.456s 00:16:50.995 sys 0m3.145s 00:16:50.995 22:00:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:50.995 22:00:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:50.995 ************************************ 00:16:50.995 END TEST nvmf_host_management 00:16:50.995 ************************************ 00:16:50.995 22:00:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:50.995 22:00:10 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:50.995 22:00:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:50.995 22:00:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:50.995 22:00:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:50.995 ************************************ 00:16:50.995 START TEST nvmf_lvol 00:16:50.995 ************************************ 00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:50.995 * Looking for test storage... 
00:16:50.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain dirs repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[toolchain dirs repeated as above]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[toolchain dirs repeated as above]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[toolchain dirs repeated as above]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:50.995 22:00:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:50.996 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:16:50.996 22:00:10 nvmf_tcp.nvmf_lvol --
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:50.996 22:00:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:50.996 22:00:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:53.528 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:53.528 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:53.528 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:53.528 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:53.528 
22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:53.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:53.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:16:53.528 00:16:53.528 --- 10.0.0.2 ping statistics --- 00:16:53.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.528 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:53.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:53.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:16:53.528 00:16:53.528 --- 10.0.0.1 ping statistics --- 00:16:53.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:53.528 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:53.528 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:53.529 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:53.529 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:53.529 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:53.529 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:53.529 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:53.529 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:53.529 22:00:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:53.529 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:53.529 22:00:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:53.529 22:00:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:53.529 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=4047716 00:16:53.529 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:53.529 22:00:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 4047716 00:16:53.529 22:00:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 4047716 ']' 00:16:53.529 22:00:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.529 22:00:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:53.529 22:00:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.529 22:00:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:53.529 22:00:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:53.529 [2024-07-13 22:00:12.631346] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:53.529 [2024-07-13 22:00:12.631488] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.529 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.529 [2024-07-13 22:00:12.773093] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:53.787 [2024-07-13 22:00:13.033073] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.787 [2024-07-13 22:00:13.033159] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:53.787 [2024-07-13 22:00:13.033193] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.787 [2024-07-13 22:00:13.033214] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.787 [2024-07-13 22:00:13.033236] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:53.787 [2024-07-13 22:00:13.033363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.787 [2024-07-13 22:00:13.033425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.787 [2024-07-13 22:00:13.033434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.362 22:00:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:54.362 22:00:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:16:54.362 22:00:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:54.362 22:00:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:54.362 22:00:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:54.362 22:00:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.362 22:00:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:54.620 [2024-07-13 22:00:13.795409] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:54.620 22:00:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:54.878 22:00:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:54.878 22:00:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:55.134 22:00:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:55.134 22:00:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:55.391 22:00:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:55.648 22:00:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a0eada79-805f-4d8d-b2e1-97f60fade2e5 00:16:55.648 22:00:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a0eada79-805f-4d8d-b2e1-97f60fade2e5 lvol 20 00:16:55.906 22:00:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=9a4100c4-a592-4f50-ad29-5df59ab8f7a8 00:16:55.906 22:00:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:56.199 22:00:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9a4100c4-a592-4f50-ad29-5df59ab8f7a8 00:16:56.480 22:00:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:16:56.738 [2024-07-13 22:00:15.953062] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.738 22:00:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:56.996 22:00:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4048157 00:16:56.996 22:00:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:56.996 22:00:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:56.996 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.931 22:00:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 9a4100c4-a592-4f50-ad29-5df59ab8f7a8 MY_SNAPSHOT 00:16:58.189 22:00:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9fcd1687-832d-4d2c-a2c0-a066d3a0192c 00:16:58.189 22:00:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 9a4100c4-a592-4f50-ad29-5df59ab8f7a8 30 00:16:58.754 22:00:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9fcd1687-832d-4d2c-a2c0-a066d3a0192c MY_CLONE 00:16:59.013 22:00:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1014f847-7b6a-4d17-a948-a5982114e2f9 00:16:59.013 22:00:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1014f847-7b6a-4d17-a948-a5982114e2f9 00:16:59.580 22:00:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4048157 00:17:07.699 Initializing NVMe Controllers 00:17:07.699 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:07.699 Controller IO queue size 128, less than required. 00:17:07.699 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:07.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:07.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:07.699 Initialization complete. Launching workers. 
00:17:07.699 ======================================================== 00:17:07.699 Latency(us) 00:17:07.699 Device Information : IOPS MiB/s Average min max 00:17:07.699 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7896.00 30.84 16218.65 374.52 133273.30 00:17:07.699 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8143.00 31.81 15723.87 4187.22 171895.94 00:17:07.699 ======================================================== 00:17:07.699 Total : 16039.00 62.65 15967.45 374.52 171895.94 00:17:07.699 00:17:07.699 22:00:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:07.699 22:00:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9a4100c4-a592-4f50-ad29-5df59ab8f7a8 00:17:07.956 22:00:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a0eada79-805f-4d8d-b2e1-97f60fade2e5 00:17:08.214 22:00:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:08.214 22:00:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:08.214 22:00:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:08.214 22:00:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:08.214 22:00:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:17:08.214 22:00:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:08.214 22:00:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:17:08.214 22:00:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:08.214 22:00:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:08.214 rmmod nvme_tcp 00:17:08.214 rmmod nvme_fabrics 00:17:08.214 rmmod nvme_keyring 00:17:08.214 22:00:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:08.214 22:00:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:17:08.214 22:00:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:17:08.214 22:00:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 4047716 ']' 00:17:08.214 22:00:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 4047716 00:17:08.473 22:00:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 4047716 ']' 00:17:08.473 22:00:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 4047716 00:17:08.473 22:00:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:17:08.473 22:00:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:08.473 22:00:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4047716 00:17:08.473 22:00:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:08.473 22:00:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:08.473 22:00:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4047716' 00:17:08.473 killing process with pid 4047716 00:17:08.473 22:00:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 4047716 00:17:08.473 22:00:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 4047716 00:17:09.853 22:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:09.853 
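The nvmf_lvol run that finishes above boils down to a short logical-volume lifecycle driven through rpc.py while spdk_nvme_perf pushes random writes at the exported namespace. A minimal sketch of that sequence, assuming $rpc points at the scripts/rpc.py path shown throughout this log (the command-substitution capture of UUIDs is readability shorthand, not a verbatim transcript line):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)         # lvstore on the Malloc0/Malloc1 raid0
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)        # 20 MiB lvol, exported as cnode0 NSID 1
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)    # point-in-time snapshot
    $rpc bdev_lvol_resize "$lvol" 30                       # grow the live lvol while I/O runs
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)         # thin clone of the snapshot
    $rpc bdev_lvol_inflate "$clone"                        # fully allocate the clone, detaching it from the snapshot

Every rpc.py subcommand in the sketch appears verbatim in the trace; only the variable plumbing is added.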
22:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:09.853 22:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:09.853 22:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:09.853 22:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:09.853 22:00:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.853 22:00:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.853 22:00:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:12.423 00:17:12.423 real 0m20.950s 00:17:12.423 user 1m9.469s 00:17:12.423 sys 0m5.482s 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:12.423 ************************************ 00:17:12.423 END TEST nvmf_lvol 00:17:12.423 ************************************ 00:17:12.423 22:00:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:12.423 22:00:31 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:12.423 22:00:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:12.423 22:00:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:12.423 22:00:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:12.423 ************************************ 00:17:12.423 START TEST nvmf_lvs_grow 00:17:12.423 ************************************ 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:12.423 * Looking for test storage... 
00:17:12.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:12.423 22:00:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:14.326 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:14.326 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:14.326 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:14.326 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:14.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:14.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:17:14.326 00:17:14.326 --- 10.0.0.2 ping statistics --- 00:17:14.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.326 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:14.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:14.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:17:14.326 00:17:14.326 --- 10.0.0.1 ping statistics --- 00:17:14.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.326 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.326 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:14.327 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:14.327 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.327 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:14.327 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:14.327 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.327 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:14.327 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:14.327 22:00:33 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:14.327 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:14.327 22:00:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:14.327 22:00:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:14.327 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=4051542 00:17:14.327 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:14.327 22:00:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 4051542 00:17:14.327 22:00:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 4051542 ']' 00:17:14.327 22:00:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.327 22:00:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:14.327 22:00:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.327 22:00:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:14.327 22:00:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:14.327 [2024-07-13 22:00:33.496993] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:14.327 [2024-07-13 22:00:33.497125] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.327 EAL: No free 2048 kB hugepages reported on node 1 00:17:14.327 [2024-07-13 22:00:33.638555] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.586 [2024-07-13 22:00:33.898687] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.586 [2024-07-13 22:00:33.898760] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:14.586 [2024-07-13 22:00:33.898788] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.586 [2024-07-13 22:00:33.898813] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.586 [2024-07-13 22:00:33.898834] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:14.586 [2024-07-13 22:00:33.898895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.151 22:00:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:15.151 22:00:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:17:15.151 22:00:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:15.151 22:00:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:15.151 22:00:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:15.151 22:00:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.151 22:00:34 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:15.410 [2024-07-13 22:00:34.657115] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.410 22:00:34 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:15.410 22:00:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:15.410 22:00:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:15.410 22:00:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:15.410 ************************************ 00:17:15.410 START TEST lvs_grow_clean 00:17:15.410 ************************************ 00:17:15.410 22:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:17:15.410 22:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:15.410 22:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:15.410 22:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:15.410 22:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:15.410 22:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:15.410 22:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:15.410 22:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:15.410 22:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:15.410 22:00:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:15.668 22:00:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:17:15.668 22:00:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:15.927 22:00:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3de9ca3f-5a38-4cd3-be6f-2dd43c7bf77c 00:17:15.927 22:00:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3de9ca3f-5a38-4cd3-be6f-2dd43c7bf77c 00:17:15.927 22:00:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:16.185 22:00:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:16.185 22:00:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:16.185 22:00:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3de9ca3f-5a38-4cd3-be6f-2dd43c7bf77c lvol 150 00:17:16.444 22:00:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=02a013da-3376-4a76-869b-587b94190c60 00:17:16.444 22:00:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:16.444 22:00:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:16.742 [2024-07-13 22:00:36.023558] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:16.742 [2024-07-13 22:00:36.023671] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:16.742 true 00:17:16.742 22:00:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3de9ca3f-5a38-4cd3-be6f-2dd43c7bf77c 00:17:16.742 22:00:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:17.002 22:00:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:17.002 22:00:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:17.272 22:00:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 02a013da-3376-4a76-869b-587b94190c60 00:17:17.537 22:00:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:17.796 [2024-07-13 22:00:37.018804] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:17.796 22:00:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:18.053 22:00:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4052099 00:17:18.054 22:00:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:18.054 22:00:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:18.054 22:00:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4052099 /var/tmp/bdevperf.sock 00:17:18.054 22:00:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 4052099 ']' 00:17:18.054 22:00:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:18.054 22:00:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.054 22:00:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:18.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:18.054 22:00:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.054 22:00:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:18.054 [2024-07-13 22:00:37.362858] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:17:18.054 [2024-07-13 22:00:37.363025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4052099 ] 00:17:18.054 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.311 [2024-07-13 22:00:37.495433] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.570 [2024-07-13 22:00:37.745977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.137 22:00:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:19.137 22:00:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:17:19.137 22:00:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:19.395 Nvme0n1 00:17:19.395 22:00:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:19.654 [ 00:17:19.654 { 00:17:19.654 "name": "Nvme0n1", 00:17:19.654 "aliases": [ 00:17:19.654 "02a013da-3376-4a76-869b-587b94190c60" 00:17:19.654 ], 00:17:19.654 "product_name": "NVMe disk", 00:17:19.654 "block_size": 4096, 00:17:19.654 "num_blocks": 38912, 00:17:19.654 "uuid": "02a013da-3376-4a76-869b-587b94190c60", 00:17:19.654 "assigned_rate_limits": { 00:17:19.654 "rw_ios_per_sec": 0, 00:17:19.654 "rw_mbytes_per_sec": 0, 00:17:19.654 "r_mbytes_per_sec": 0, 00:17:19.654 "w_mbytes_per_sec": 0 00:17:19.654 }, 00:17:19.654 "claimed": false, 00:17:19.654 "zoned": false, 00:17:19.654 "supported_io_types": { 00:17:19.654 "read": true, 00:17:19.654 "write": true, 00:17:19.654 "unmap": true, 00:17:19.654 "flush": true, 00:17:19.654 "reset": true, 00:17:19.654 "nvme_admin": true, 00:17:19.654 "nvme_io": true, 00:17:19.654 "nvme_io_md": false, 00:17:19.654 "write_zeroes": true, 00:17:19.654 "zcopy": false, 00:17:19.654 "get_zone_info": false, 00:17:19.654 "zone_management": false, 00:17:19.654 "zone_append": false, 00:17:19.654 "compare": true, 00:17:19.654 "compare_and_write": true, 00:17:19.654 "abort": true, 00:17:19.654 "seek_hole": false, 00:17:19.654 "seek_data": false, 00:17:19.654 "copy": true, 00:17:19.654 "nvme_iov_md": false 00:17:19.654 }, 00:17:19.654 "memory_domains": [ 00:17:19.654 { 00:17:19.654 "dma_device_id": "system", 00:17:19.654 "dma_device_type": 1 00:17:19.654 } 00:17:19.654 ], 00:17:19.654 "driver_specific": { 00:17:19.654 "nvme": [ 00:17:19.654 { 00:17:19.654 "trid": { 00:17:19.654 "trtype": "TCP", 00:17:19.654 "adrfam": "IPv4", 00:17:19.654 "traddr": "10.0.0.2", 00:17:19.654 "trsvcid": "4420", 00:17:19.654 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:19.654 }, 00:17:19.654 "ctrlr_data": { 00:17:19.654 "cntlid": 1, 00:17:19.654 "vendor_id": "0x8086", 00:17:19.654 "model_number": "SPDK bdev Controller", 00:17:19.654 "serial_number": "SPDK0", 00:17:19.654 "firmware_revision": "24.09", 00:17:19.654 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:19.654 "oacs": { 00:17:19.654 "security": 0, 00:17:19.654 "format": 0, 00:17:19.654 "firmware": 0, 00:17:19.654 "ns_manage": 0 00:17:19.654 }, 00:17:19.654 "multi_ctrlr": true, 00:17:19.654 "ana_reporting": false 00:17:19.654 }, 
00:17:19.654 "vs": { 00:17:19.654 "nvme_version": "1.3" 00:17:19.654 }, 00:17:19.654 "ns_data": { 00:17:19.654 "id": 1, 00:17:19.654 "can_share": true 00:17:19.654 } 00:17:19.654 } 00:17:19.654 ], 00:17:19.654 "mp_policy": "active_passive" 00:17:19.654 } 00:17:19.654 } 00:17:19.654 ] 00:17:19.654 22:00:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4052250 00:17:19.654 22:00:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:19.654 22:00:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:19.912 Running I/O for 10 seconds... 00:17:20.850 Latency(us) 00:17:20.850 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.850 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.850 Nvme0n1 : 1.00 10596.00 41.39 0.00 0.00 0.00 0.00 0.00 00:17:20.850 =================================================================================================================== 00:17:20.850 Total : 10596.00 41.39 0.00 0.00 0.00 0.00 0.00 00:17:20.850 00:17:21.785 22:00:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3de9ca3f-5a38-4cd3-be6f-2dd43c7bf77c 00:17:21.785 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.785 Nvme0n1 : 2.00 11067.50 43.23 0.00 0.00 0.00 0.00 0.00 00:17:21.786 =================================================================================================================== 00:17:21.786 Total : 11067.50 43.23 0.00 0.00 0.00 0.00 0.00 00:17:21.786 00:17:22.044 true 00:17:22.044 22:00:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3de9ca3f-5a38-4cd3-be6f-2dd43c7bf77c 00:17:22.044 22:00:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:22.302 22:00:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:22.302 22:00:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:22.302 22:00:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4052250 00:17:22.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:22.868 Nvme0n1 : 3.00 11034.67 43.10 0.00 0.00 0.00 0.00 0.00 00:17:22.869 =================================================================================================================== 00:17:22.869 Total : 11034.67 43.10 0.00 0.00 0.00 0.00 0.00 00:17:22.869 00:17:23.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:23.819 Nvme0n1 : 4.00 11051.25 43.17 0.00 0.00 0.00 0.00 0.00 00:17:23.819 =================================================================================================================== 00:17:23.819 Total : 11051.25 43.17 0.00 0.00 0.00 0.00 0.00 00:17:23.819 00:17:24.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.760 Nvme0n1 : 5.00 11101.40 43.36 0.00 0.00 0.00 0.00 0.00 00:17:24.760 =================================================================================================================== 00:17:24.760 
Total : 11101.40 43.36 0.00 0.00 0.00 0.00 0.00 00:17:24.760 00:17:26.135 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:26.135 Nvme0n1 : 6.00 11113.50 43.41 0.00 0.00 0.00 0.00 0.00 00:17:26.135 =================================================================================================================== 00:17:26.135 Total : 11113.50 43.41 0.00 0.00 0.00 0.00 0.00 00:17:26.135 00:17:27.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.072 Nvme0n1 : 7.00 11151.57 43.56 0.00 0.00 0.00 0.00 0.00 00:17:27.072 =================================================================================================================== 00:17:27.072 Total : 11151.57 43.56 0.00 0.00 0.00 0.00 0.00 00:17:27.072 00:17:28.008 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:28.008 Nvme0n1 : 8.00 11168.75 43.63 0.00 0.00 0.00 0.00 0.00 00:17:28.008 =================================================================================================================== 00:17:28.008 Total : 11168.75 43.63 0.00 0.00 0.00 0.00 0.00 00:17:28.008 00:17:28.942 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:28.942 Nvme0n1 : 9.00 11170.44 43.63 0.00 0.00 0.00 0.00 0.00 00:17:28.942 =================================================================================================================== 00:17:28.942 Total : 11170.44 43.63 0.00 0.00 0.00 0.00 0.00 00:17:28.942 00:17:29.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:29.878 Nvme0n1 : 10.00 11175.50 43.65 0.00 0.00 0.00 0.00 0.00 00:17:29.878 =================================================================================================================== 00:17:29.878 Total : 11175.50 43.65 0.00 0.00 0.00 0.00 0.00 00:17:29.878 00:17:29.878 00:17:29.878 Latency(us) 00:17:29.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:29.878 Nvme0n1 : 10.01 11179.95 43.67 0.00 0.00 11441.55 5922.51 30680.56 00:17:29.878 =================================================================================================================== 00:17:29.878 Total : 11179.95 43.67 0.00 0.00 11441.55 5922.51 30680.56 00:17:29.878 0 00:17:29.878 22:00:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4052099 00:17:29.878 22:00:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 4052099 ']' 00:17:29.878 22:00:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 4052099 00:17:29.878 22:00:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:17:29.878 22:00:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:29.878 22:00:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4052099 00:17:29.878 22:00:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:29.878 22:00:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:29.878 22:00:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4052099' 00:17:29.878 killing process with pid 4052099 00:17:29.878 22:00:49 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 4052099 00:17:29.878 Received shutdown signal, test time was about 10.000000 seconds 00:17:29.878 00:17:29.878 Latency(us) 00:17:29.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.878 =================================================================================================================== 00:17:29.878 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:29.878 22:00:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 4052099 00:17:31.276 22:00:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:31.276 22:00:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:31.569 22:00:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3de9ca3f-5a38-4cd3-be6f-2dd43c7bf77c 00:17:31.569 22:00:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:31.826 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:31.826 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:31.826 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:32.083 [2024-07-13 22:00:51.305039] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:32.083 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3de9ca3f-5a38-4cd3-be6f-2dd43c7bf77c 00:17:32.083 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:32.083 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3de9ca3f-5a38-4cd3-be6f-2dd43c7bf77c 00:17:32.083 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:32.083 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:32.083 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:32.083 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:32.083 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:32.083 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:32.083 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:32.083 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:32.083 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3de9ca3f-5a38-4cd3-be6f-2dd43c7bf77c 00:17:32.340 request: 00:17:32.340 { 00:17:32.341 "uuid": "3de9ca3f-5a38-4cd3-be6f-2dd43c7bf77c", 00:17:32.341 "method": "bdev_lvol_get_lvstores", 00:17:32.341 "req_id": 1 00:17:32.341 } 00:17:32.341 Got JSON-RPC error response 00:17:32.341 response: 00:17:32.341 { 00:17:32.341 "code": -19, 00:17:32.341 "message": "No such device" 00:17:32.341 } 00:17:32.341 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:32.341 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:32.341 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:32.341 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:32.341 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:32.598 aio_bdev 00:17:32.598 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 02a013da-3376-4a76-869b-587b94190c60 00:17:32.598 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=02a013da-3376-4a76-869b-587b94190c60 00:17:32.598 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:32.598 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:17:32.598 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:32.598 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:32.598 22:00:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:32.856 22:00:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 02a013da-3376-4a76-869b-587b94190c60 -t 2000 00:17:33.114 [ 00:17:33.114 { 00:17:33.114 "name": "02a013da-3376-4a76-869b-587b94190c60", 00:17:33.114 "aliases": [ 00:17:33.114 "lvs/lvol" 00:17:33.114 ], 00:17:33.114 "product_name": "Logical Volume", 00:17:33.114 "block_size": 4096, 00:17:33.114 "num_blocks": 38912, 00:17:33.114 "uuid": "02a013da-3376-4a76-869b-587b94190c60", 00:17:33.114 "assigned_rate_limits": { 00:17:33.114 "rw_ios_per_sec": 0, 00:17:33.114 "rw_mbytes_per_sec": 0, 00:17:33.114 "r_mbytes_per_sec": 0, 00:17:33.114 "w_mbytes_per_sec": 0 00:17:33.114 }, 00:17:33.114 "claimed": false, 00:17:33.114 "zoned": false, 00:17:33.114 "supported_io_types": { 00:17:33.114 "read": true, 00:17:33.114 "write": true, 00:17:33.114 "unmap": true, 00:17:33.114 "flush": false, 00:17:33.114 "reset": true, 00:17:33.114 "nvme_admin": false, 00:17:33.114 "nvme_io": false, 00:17:33.114 
"nvme_io_md": false, 00:17:33.114 "write_zeroes": true, 00:17:33.114 "zcopy": false, 00:17:33.114 "get_zone_info": false, 00:17:33.114 "zone_management": false, 00:17:33.114 "zone_append": false, 00:17:33.114 "compare": false, 00:17:33.114 "compare_and_write": false, 00:17:33.114 "abort": false, 00:17:33.114 "seek_hole": true, 00:17:33.114 "seek_data": true, 00:17:33.114 "copy": false, 00:17:33.114 "nvme_iov_md": false 00:17:33.114 }, 00:17:33.114 "driver_specific": { 00:17:33.114 "lvol": { 00:17:33.114 "lvol_store_uuid": "3de9ca3f-5a38-4cd3-be6f-2dd43c7bf77c", 00:17:33.114 "base_bdev": "aio_bdev", 00:17:33.114 "thin_provision": false, 00:17:33.114 "num_allocated_clusters": 38, 00:17:33.114 "snapshot": false, 00:17:33.114 "clone": false, 00:17:33.114 "esnap_clone": false 00:17:33.114 } 00:17:33.114 } 00:17:33.114 } 00:17:33.114 ] 00:17:33.114 22:00:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:17:33.114 22:00:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3de9ca3f-5a38-4cd3-be6f-2dd43c7bf77c 00:17:33.114 22:00:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:33.372 22:00:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:33.372 22:00:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3de9ca3f-5a38-4cd3-be6f-2dd43c7bf77c 00:17:33.372 22:00:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:33.630 22:00:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:33.630 22:00:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 02a013da-3376-4a76-869b-587b94190c60 00:17:33.890 22:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3de9ca3f-5a38-4cd3-be6f-2dd43c7bf77c 00:17:34.148 22:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:34.406 22:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:34.406 00:17:34.406 real 0m18.972s 00:17:34.406 user 0m18.559s 00:17:34.406 sys 0m1.984s 00:17:34.406 22:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:34.406 22:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:34.406 ************************************ 00:17:34.406 END TEST lvs_grow_clean 00:17:34.406 ************************************ 00:17:34.406 22:00:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:34.406 22:00:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:34.406 22:00:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:34.406 22:00:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:17:34.406 22:00:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:34.406 ************************************ 00:17:34.406 START TEST lvs_grow_dirty 00:17:34.406 ************************************ 00:17:34.406 22:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:17:34.406 22:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:34.406 22:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:34.406 22:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:34.406 22:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:34.406 22:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:34.406 22:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:34.406 22:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:34.406 22:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:34.406 22:00:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:34.664 22:00:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:34.664 22:00:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:34.921 22:00:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ebfd87e0-32ae-4ab6-8167-1911a64de783 00:17:34.921 22:00:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ebfd87e0-32ae-4ab6-8167-1911a64de783 00:17:34.921 22:00:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:35.180 22:00:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:35.180 22:00:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:35.180 22:00:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ebfd87e0-32ae-4ab6-8167-1911a64de783 lvol 150 00:17:35.440 22:00:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5be7ad90-8828-4401-85e4-e9f0a57dba1d 00:17:35.440 22:00:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:35.440 22:00:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:35.699 
[2024-07-13 22:00:55.023557] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:35.699 [2024-07-13 22:00:55.023690] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:35.699 true 00:17:35.699 22:00:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ebfd87e0-32ae-4ab6-8167-1911a64de783 00:17:35.699 22:00:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:35.958 22:00:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:35.958 22:00:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:36.215 22:00:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5be7ad90-8828-4401-85e4-e9f0a57dba1d 00:17:36.472 22:00:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:36.729 [2024-07-13 22:00:56.095013] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:36.729 22:00:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:36.988 22:00:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4054368 00:17:36.988 22:00:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:36.988 22:00:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:36.988 22:00:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4054368 /var/tmp/bdevperf.sock 00:17:36.988 22:00:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 4054368 ']' 00:17:36.988 22:00:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:36.988 22:00:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:36.988 22:00:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:36.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
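bdevperf was launched above with -z, so it boots idle on its own RPC socket and waits for the harness. What follows is the attach-and-run handshake; a hedged sketch, assuming SPDK_DIR is the checkout root:

# Drive an idle (-z) bdevperf instance over its private RPC socket (sketch).
rpc="$SPDK_DIR/scripts/rpc.py"
sock=/var/tmp/bdevperf.sock
# Expose the target's namespace to bdevperf as bdev Nvme0n1.
"$rpc" -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
# Kick off the configured workload (randwrite, qd 128, 10 s) and wait for it.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests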
00:17:36.988 22:00:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:36.988 22:00:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:37.247 [2024-07-13 22:00:56.442587] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:37.247 [2024-07-13 22:00:56.442736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4054368 ] 00:17:37.247 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.247 [2024-07-13 22:00:56.571645] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.507 [2024-07-13 22:00:56.824533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.073 22:00:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:38.073 22:00:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:38.073 22:00:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:38.639 Nvme0n1 00:17:38.639 22:00:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:38.639 [ 00:17:38.639 { 00:17:38.639 "name": "Nvme0n1", 00:17:38.639 "aliases": [ 00:17:38.639 "5be7ad90-8828-4401-85e4-e9f0a57dba1d" 00:17:38.639 ], 00:17:38.639 "product_name": "NVMe disk", 00:17:38.639 "block_size": 4096, 00:17:38.639 "num_blocks": 38912, 00:17:38.639 "uuid": "5be7ad90-8828-4401-85e4-e9f0a57dba1d", 00:17:38.639 "assigned_rate_limits": { 00:17:38.639 "rw_ios_per_sec": 0, 00:17:38.639 "rw_mbytes_per_sec": 0, 00:17:38.639 "r_mbytes_per_sec": 0, 00:17:38.639 "w_mbytes_per_sec": 0 00:17:38.639 }, 00:17:38.639 "claimed": false, 00:17:38.639 "zoned": false, 00:17:38.639 "supported_io_types": { 00:17:38.639 "read": true, 00:17:38.639 "write": true, 00:17:38.639 "unmap": true, 00:17:38.639 "flush": true, 00:17:38.639 "reset": true, 00:17:38.639 "nvme_admin": true, 00:17:38.639 "nvme_io": true, 00:17:38.639 "nvme_io_md": false, 00:17:38.639 "write_zeroes": true, 00:17:38.639 "zcopy": false, 00:17:38.639 "get_zone_info": false, 00:17:38.639 "zone_management": false, 00:17:38.639 "zone_append": false, 00:17:38.639 "compare": true, 00:17:38.639 "compare_and_write": true, 00:17:38.639 "abort": true, 00:17:38.639 "seek_hole": false, 00:17:38.639 "seek_data": false, 00:17:38.639 "copy": true, 00:17:38.639 "nvme_iov_md": false 00:17:38.639 }, 00:17:38.639 "memory_domains": [ 00:17:38.639 { 00:17:38.639 "dma_device_id": "system", 00:17:38.639 "dma_device_type": 1 00:17:38.639 } 00:17:38.639 ], 00:17:38.639 "driver_specific": { 00:17:38.639 "nvme": [ 00:17:38.639 { 00:17:38.639 "trid": { 00:17:38.639 "trtype": "TCP", 00:17:38.639 "adrfam": "IPv4", 00:17:38.639 "traddr": "10.0.0.2", 00:17:38.639 "trsvcid": "4420", 00:17:38.639 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:38.639 }, 00:17:38.639 "ctrlr_data": { 00:17:38.639 "cntlid": 1, 00:17:38.639 "vendor_id": "0x8086", 00:17:38.639 "model_number": "SPDK bdev Controller", 00:17:38.639 "serial_number": "SPDK0", 
00:17:38.639 "firmware_revision": "24.09", 00:17:38.639 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:38.639 "oacs": { 00:17:38.639 "security": 0, 00:17:38.639 "format": 0, 00:17:38.639 "firmware": 0, 00:17:38.639 "ns_manage": 0 00:17:38.639 }, 00:17:38.639 "multi_ctrlr": true, 00:17:38.639 "ana_reporting": false 00:17:38.639 }, 00:17:38.639 "vs": { 00:17:38.639 "nvme_version": "1.3" 00:17:38.639 }, 00:17:38.639 "ns_data": { 00:17:38.639 "id": 1, 00:17:38.639 "can_share": true 00:17:38.639 } 00:17:38.639 } 00:17:38.639 ], 00:17:38.639 "mp_policy": "active_passive" 00:17:38.639 } 00:17:38.639 } 00:17:38.639 ] 00:17:38.639 22:00:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4054549 00:17:38.639 22:00:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:38.639 22:00:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:38.898 Running I/O for 10 seconds... 00:17:39.850 Latency(us) 00:17:39.850 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.850 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:39.850 Nvme0n1 : 1.00 10780.00 42.11 0.00 0.00 0.00 0.00 0.00 00:17:39.850 =================================================================================================================== 00:17:39.850 Total : 10780.00 42.11 0.00 0.00 0.00 0.00 0.00 00:17:39.850 00:17:40.787 22:01:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ebfd87e0-32ae-4ab6-8167-1911a64de783 00:17:40.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:40.787 Nvme0n1 : 2.00 10842.00 42.35 0.00 0.00 0.00 0.00 0.00 00:17:40.787 =================================================================================================================== 00:17:40.787 Total : 10842.00 42.35 0.00 0.00 0.00 0.00 0.00 00:17:40.787 00:17:41.057 true 00:17:41.057 22:01:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ebfd87e0-32ae-4ab6-8167-1911a64de783 00:17:41.057 22:01:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:41.318 22:01:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:41.318 22:01:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:41.318 22:01:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4054549 00:17:41.887 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:41.887 Nvme0n1 : 3.00 10862.33 42.43 0.00 0.00 0.00 0.00 0.00 00:17:41.887 =================================================================================================================== 00:17:41.887 Total : 10862.33 42.43 0.00 0.00 0.00 0.00 0.00 00:17:41.887 00:17:42.837 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:42.837 Nvme0n1 : 4.00 10909.25 42.61 0.00 0.00 0.00 0.00 0.00 00:17:42.837 =================================================================================================================== 00:17:42.837 Total : 10909.25 42.61 0.00 
0.00 0.00 0.00 0.00 00:17:42.837 00:17:43.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:43.776 Nvme0n1 : 5.00 10945.60 42.76 0.00 0.00 0.00 0.00 0.00 00:17:43.777 =================================================================================================================== 00:17:43.777 Total : 10945.60 42.76 0.00 0.00 0.00 0.00 0.00 00:17:43.777 00:17:45.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:45.154 Nvme0n1 : 6.00 10991.50 42.94 0.00 0.00 0.00 0.00 0.00 00:17:45.154 =================================================================================================================== 00:17:45.154 Total : 10991.50 42.94 0.00 0.00 0.00 0.00 0.00 00:17:45.154 00:17:45.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:45.756 Nvme0n1 : 7.00 11026.29 43.07 0.00 0.00 0.00 0.00 0.00 00:17:45.756 =================================================================================================================== 00:17:45.756 Total : 11026.29 43.07 0.00 0.00 0.00 0.00 0.00 00:17:45.756 00:17:47.134 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:47.134 Nvme0n1 : 8.00 11043.62 43.14 0.00 0.00 0.00 0.00 0.00 00:17:47.134 =================================================================================================================== 00:17:47.134 Total : 11043.62 43.14 0.00 0.00 0.00 0.00 0.00 00:17:47.134 00:17:48.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:48.072 Nvme0n1 : 9.00 11063.11 43.22 0.00 0.00 0.00 0.00 0.00 00:17:48.072 =================================================================================================================== 00:17:48.072 Total : 11063.11 43.22 0.00 0.00 0.00 0.00 0.00 00:17:48.072 00:17:49.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:49.010 Nvme0n1 : 10.00 11066.90 43.23 0.00 0.00 0.00 0.00 0.00 00:17:49.010 =================================================================================================================== 00:17:49.010 Total : 11066.90 43.23 0.00 0.00 0.00 0.00 0.00 00:17:49.010 00:17:49.010 00:17:49.010 Latency(us) 00:17:49.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:49.010 Nvme0n1 : 10.00 11075.57 43.26 0.00 0.00 11550.13 2827.76 23495.87 00:17:49.010 =================================================================================================================== 00:17:49.010 Total : 11075.57 43.26 0.00 0.00 11550.13 2827.76 23495.87 00:17:49.010 0 00:17:49.010 22:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4054368 00:17:49.010 22:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 4054368 ']' 00:17:49.010 22:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 4054368 00:17:49.010 22:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:17:49.010 22:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:49.010 22:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4054368 00:17:49.010 22:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:49.010 22:01:08 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:49.010 22:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4054368' 00:17:49.010 killing process with pid 4054368 00:17:49.010 22:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 4054368 00:17:49.010 Received shutdown signal, test time was about 10.000000 seconds 00:17:49.010 00:17:49.010 Latency(us) 00:17:49.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.010 =================================================================================================================== 00:17:49.010 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:49.010 22:01:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 4054368 00:17:49.948 22:01:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:50.206 22:01:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:50.463 22:01:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ebfd87e0-32ae-4ab6-8167-1911a64de783 00:17:50.463 22:01:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:50.722 22:01:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:50.722 22:01:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:50.722 22:01:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4051542 00:17:50.722 22:01:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4051542 00:17:50.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4051542 Killed "${NVMF_APP[@]}" "$@" 00:17:50.722 22:01:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:50.722 22:01:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:50.722 22:01:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:50.722 22:01:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:50.722 22:01:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:50.722 22:01:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=4056003 00:17:50.722 22:01:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:50.722 22:01:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 4056003 00:17:50.722 22:01:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 4056003 ']' 00:17:50.722 22:01:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.722 22:01:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:17:50.722 22:01:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.722 22:01:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:50.722 22:01:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:50.982 [2024-07-13 22:01:10.176938] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:50.982 [2024-07-13 22:01:10.177106] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.982 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.982 [2024-07-13 22:01:10.330241] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.241 [2024-07-13 22:01:10.587284] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.241 [2024-07-13 22:01:10.587369] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.241 [2024-07-13 22:01:10.587400] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.241 [2024-07-13 22:01:10.587425] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.241 [2024-07-13 22:01:10.587447] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
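This restart is the point of the dirty variant: the previous target (pid 4051542) was removed with SIGKILL while the lvol store was still open, so the blobstore never shut down cleanly. Re-creating the AIO bdev under the fresh target, just below, forces the blobstore to replay its metadata, which is what the "Performing recovery on blobstore" notices report. A sketch of the step, with the netns wrapper used in this run omitted and paths assumed:

# Dirty restart (sketch): SIGKILL, relaunch, re-attach the AIO backing file.
kill -9 "$nvmfpid"                         # lvstore metadata left un-flushed
"$SPDK_DIR/build/bin/nvmf_tgt" -m 0x1 &    # fresh target process
nvmfpid=$!
"$rpc" bdev_aio_create "$aio_file" aio_bdev 4096   # triggers blobstore recovery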
00:17:51.241 [2024-07-13 22:01:10.587497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.809 22:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:51.809 22:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:51.809 22:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:51.809 22:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:51.809 22:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:51.809 22:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.809 22:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:52.066 [2024-07-13 22:01:11.372899] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:52.066 [2024-07-13 22:01:11.373151] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:52.066 [2024-07-13 22:01:11.373239] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:52.066 22:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:52.066 22:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5be7ad90-8828-4401-85e4-e9f0a57dba1d 00:17:52.066 22:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=5be7ad90-8828-4401-85e4-e9f0a57dba1d 00:17:52.066 22:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:52.066 22:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:52.066 22:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:52.066 22:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:52.066 22:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:52.324 22:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5be7ad90-8828-4401-85e4-e9f0a57dba1d -t 2000 00:17:52.583 [ 00:17:52.583 { 00:17:52.583 "name": "5be7ad90-8828-4401-85e4-e9f0a57dba1d", 00:17:52.583 "aliases": [ 00:17:52.583 "lvs/lvol" 00:17:52.583 ], 00:17:52.583 "product_name": "Logical Volume", 00:17:52.583 "block_size": 4096, 00:17:52.583 "num_blocks": 38912, 00:17:52.583 "uuid": "5be7ad90-8828-4401-85e4-e9f0a57dba1d", 00:17:52.583 "assigned_rate_limits": { 00:17:52.583 "rw_ios_per_sec": 0, 00:17:52.583 "rw_mbytes_per_sec": 0, 00:17:52.583 "r_mbytes_per_sec": 0, 00:17:52.583 "w_mbytes_per_sec": 0 00:17:52.583 }, 00:17:52.583 "claimed": false, 00:17:52.583 "zoned": false, 00:17:52.583 "supported_io_types": { 00:17:52.583 "read": true, 00:17:52.583 "write": true, 00:17:52.583 "unmap": true, 00:17:52.583 "flush": false, 00:17:52.583 "reset": true, 00:17:52.583 "nvme_admin": false, 00:17:52.583 "nvme_io": false, 00:17:52.583 "nvme_io_md": 
false, 00:17:52.583 "write_zeroes": true, 00:17:52.583 "zcopy": false, 00:17:52.583 "get_zone_info": false, 00:17:52.583 "zone_management": false, 00:17:52.583 "zone_append": false, 00:17:52.583 "compare": false, 00:17:52.583 "compare_and_write": false, 00:17:52.583 "abort": false, 00:17:52.583 "seek_hole": true, 00:17:52.583 "seek_data": true, 00:17:52.583 "copy": false, 00:17:52.583 "nvme_iov_md": false 00:17:52.583 }, 00:17:52.583 "driver_specific": { 00:17:52.583 "lvol": { 00:17:52.583 "lvol_store_uuid": "ebfd87e0-32ae-4ab6-8167-1911a64de783", 00:17:52.583 "base_bdev": "aio_bdev", 00:17:52.583 "thin_provision": false, 00:17:52.583 "num_allocated_clusters": 38, 00:17:52.583 "snapshot": false, 00:17:52.583 "clone": false, 00:17:52.583 "esnap_clone": false 00:17:52.583 } 00:17:52.583 } 00:17:52.583 } 00:17:52.583 ] 00:17:52.842 22:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:52.842 22:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ebfd87e0-32ae-4ab6-8167-1911a64de783 00:17:52.842 22:01:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:52.842 22:01:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:52.842 22:01:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ebfd87e0-32ae-4ab6-8167-1911a64de783 00:17:52.842 22:01:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:53.410 22:01:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:53.410 22:01:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:53.410 [2024-07-13 22:01:12.785863] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:53.669 22:01:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ebfd87e0-32ae-4ab6-8167-1911a64de783 00:17:53.669 22:01:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:53.669 22:01:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ebfd87e0-32ae-4ab6-8167-1911a64de783 00:17:53.669 22:01:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:53.669 22:01:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.669 22:01:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:53.669 22:01:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.669 22:01:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
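The @84/@85 steps traced here form a negative check: deleting the AIO bdev hot-removes the lvol store (the vbdev_lvs_hotremove_cb notice just below), after which the store lookup must fail with -19. The NOT helper wrapping rpc.py simply inverts the exit status; the equivalent assertion, assuming lvs_uuid holds the UUID from this run:

# The lookup must fail once the base bdev is gone (sketch).
if "$rpc" bdev_lvol_get_lvstores -u "$lvs_uuid"; then
    echo "lvstore still present after hot-remove" >&2
    exit 1
fi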
00:17:53.669 22:01:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:53.669 22:01:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:53.669 22:01:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:53.669 22:01:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ebfd87e0-32ae-4ab6-8167-1911a64de783 00:17:53.926 request: 00:17:53.926 { 00:17:53.926 "uuid": "ebfd87e0-32ae-4ab6-8167-1911a64de783", 00:17:53.926 "method": "bdev_lvol_get_lvstores", 00:17:53.926 "req_id": 1 00:17:53.926 } 00:17:53.926 Got JSON-RPC error response 00:17:53.926 response: 00:17:53.926 { 00:17:53.926 "code": -19, 00:17:53.926 "message": "No such device" 00:17:53.926 } 00:17:53.926 22:01:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:53.926 22:01:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:53.926 22:01:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:53.926 22:01:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:53.926 22:01:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:54.184 aio_bdev 00:17:54.184 22:01:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5be7ad90-8828-4401-85e4-e9f0a57dba1d 00:17:54.184 22:01:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=5be7ad90-8828-4401-85e4-e9f0a57dba1d 00:17:54.184 22:01:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:54.184 22:01:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:54.184 22:01:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:54.184 22:01:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:54.184 22:01:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:54.441 22:01:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5be7ad90-8828-4401-85e4-e9f0a57dba1d -t 2000 00:17:54.699 [ 00:17:54.699 { 00:17:54.699 "name": "5be7ad90-8828-4401-85e4-e9f0a57dba1d", 00:17:54.699 "aliases": [ 00:17:54.699 "lvs/lvol" 00:17:54.699 ], 00:17:54.699 "product_name": "Logical Volume", 00:17:54.699 "block_size": 4096, 00:17:54.699 "num_blocks": 38912, 00:17:54.699 "uuid": "5be7ad90-8828-4401-85e4-e9f0a57dba1d", 00:17:54.699 "assigned_rate_limits": { 00:17:54.699 "rw_ios_per_sec": 0, 00:17:54.699 "rw_mbytes_per_sec": 0, 00:17:54.699 "r_mbytes_per_sec": 0, 00:17:54.699 "w_mbytes_per_sec": 0 00:17:54.699 }, 00:17:54.699 "claimed": false, 00:17:54.699 "zoned": false, 00:17:54.699 "supported_io_types": { 
00:17:54.699 "read": true, 00:17:54.699 "write": true, 00:17:54.699 "unmap": true, 00:17:54.699 "flush": false, 00:17:54.699 "reset": true, 00:17:54.699 "nvme_admin": false, 00:17:54.699 "nvme_io": false, 00:17:54.699 "nvme_io_md": false, 00:17:54.699 "write_zeroes": true, 00:17:54.699 "zcopy": false, 00:17:54.699 "get_zone_info": false, 00:17:54.699 "zone_management": false, 00:17:54.699 "zone_append": false, 00:17:54.699 "compare": false, 00:17:54.699 "compare_and_write": false, 00:17:54.699 "abort": false, 00:17:54.699 "seek_hole": true, 00:17:54.699 "seek_data": true, 00:17:54.699 "copy": false, 00:17:54.699 "nvme_iov_md": false 00:17:54.699 }, 00:17:54.699 "driver_specific": { 00:17:54.699 "lvol": { 00:17:54.699 "lvol_store_uuid": "ebfd87e0-32ae-4ab6-8167-1911a64de783", 00:17:54.699 "base_bdev": "aio_bdev", 00:17:54.699 "thin_provision": false, 00:17:54.699 "num_allocated_clusters": 38, 00:17:54.699 "snapshot": false, 00:17:54.699 "clone": false, 00:17:54.699 "esnap_clone": false 00:17:54.699 } 00:17:54.699 } 00:17:54.699 } 00:17:54.699 ] 00:17:54.699 22:01:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:54.699 22:01:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ebfd87e0-32ae-4ab6-8167-1911a64de783 00:17:54.699 22:01:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:54.958 22:01:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:54.958 22:01:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ebfd87e0-32ae-4ab6-8167-1911a64de783 00:17:54.958 22:01:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:55.217 22:01:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:55.217 22:01:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5be7ad90-8828-4401-85e4-e9f0a57dba1d 00:17:55.476 22:01:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ebfd87e0-32ae-4ab6-8167-1911a64de783 00:17:55.733 22:01:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:55.990 22:01:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:55.990 00:17:55.990 real 0m21.598s 00:17:55.990 user 0m54.168s 00:17:55.990 sys 0m4.706s 00:17:55.990 22:01:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:55.990 22:01:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:55.990 ************************************ 00:17:55.990 END TEST lvs_grow_dirty 00:17:55.990 ************************************ 00:17:55.990 22:01:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:55.990 22:01:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:17:55.990 22:01:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:17:55.990 22:01:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:17:55.990 22:01:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:55.990 22:01:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:55.990 22:01:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:55.990 22:01:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:55.990 22:01:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:55.990 22:01:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:55.990 nvmf_trace.0 00:17:56.247 22:01:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:17:56.247 22:01:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:56.247 22:01:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:56.247 22:01:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:56.247 22:01:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:56.247 22:01:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:56.247 22:01:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:56.247 22:01:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:56.247 rmmod nvme_tcp 00:17:56.247 rmmod nvme_fabrics 00:17:56.247 rmmod nvme_keyring 00:17:56.247 22:01:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:56.247 22:01:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:56.247 22:01:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:56.247 22:01:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 4056003 ']' 00:17:56.247 22:01:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 4056003 00:17:56.248 22:01:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 4056003 ']' 00:17:56.248 22:01:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 4056003 00:17:56.248 22:01:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:17:56.248 22:01:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:56.248 22:01:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4056003 00:17:56.248 22:01:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:56.248 22:01:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:56.248 22:01:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4056003' 00:17:56.248 killing process with pid 4056003 00:17:56.248 22:01:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 4056003 00:17:56.248 22:01:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 4056003 00:17:57.625 22:01:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:57.625 22:01:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:57.625 22:01:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:57.625 
22:01:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:57.625 22:01:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:57.625 22:01:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.625 22:01:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:57.625 22:01:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.529 22:01:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:59.529 00:17:59.529 real 0m47.533s 00:17:59.529 user 1m20.473s 00:17:59.529 sys 0m8.696s 00:17:59.529 22:01:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:59.529 22:01:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:59.529 ************************************ 00:17:59.529 END TEST nvmf_lvs_grow 00:17:59.529 ************************************ 00:17:59.529 22:01:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:59.529 22:01:18 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:59.529 22:01:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:59.529 22:01:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:59.529 22:01:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:59.529 ************************************ 00:17:59.529 START TEST nvmf_bdev_io_wait 00:17:59.529 ************************************ 00:17:59.529 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:59.786 * Looking for test storage... 
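Each suite in this log is launched through the same run_test wrapper from autotest_common.sh, which times the script, prints the START/END banners, and propagates the exit status. The invocation that produced the section below, restated as a hedged sketch with SPDK_DIR assumed:

# run_test is the harness wrapper from autotest_common.sh (sketch).
run_test nvmf_bdev_io_wait \
    "$SPDK_DIR/test/nvmf/target/bdev_io_wait.sh" --transport=tcp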
00:17:59.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:59.786 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:59.786 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:59.786 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:59.786 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:59.786 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:59.786 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:59.786 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:59.786 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:59.786 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:59.786 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:59.786 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:59.787 22:01:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:01.704 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:01.704 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:01.704 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:01.704 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:01.705 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:01.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:01.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:18:01.705 00:18:01.705 --- 10.0.0.2 ping statistics --- 00:18:01.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.705 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:01.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:01.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:18:01.705 00:18:01.705 --- 10.0.0.1 ping statistics --- 00:18:01.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:01.705 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:01.705 22:01:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:01.705 22:01:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:01.705 22:01:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:01.705 22:01:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:01.705 22:01:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:01.705 22:01:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=4058665 00:18:01.705 22:01:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:01.705 22:01:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 4058665 00:18:01.705 22:01:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 4058665 ']' 00:18:01.705 22:01:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.705 22:01:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:01.705 22:01:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.705 22:01:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:01.705 22:01:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:01.963 [2024-07-13 22:01:21.103753] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:18:01.963 [2024-07-13 22:01:21.103920] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.963 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.963 [2024-07-13 22:01:21.239078] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:02.222 [2024-07-13 22:01:21.504032] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.222 [2024-07-13 22:01:21.504092] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.222 [2024-07-13 22:01:21.504131] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.222 [2024-07-13 22:01:21.504150] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.222 [2024-07-13 22:01:21.504183] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:02.222 [2024-07-13 22:01:21.506921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.222 [2024-07-13 22:01:21.506987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.222 [2024-07-13 22:01:21.507029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.222 [2024-07-13 22:01:21.507052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:02.788 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.788 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:18:02.788 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:02.788 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:02.788 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:02.788 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.788 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:02.788 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.788 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:02.788 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.788 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:02.788 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.788 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:03.046 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.046 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:03.046 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.046 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:03.046 [2024-07-13 22:01:22.332955] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.046 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
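The three rpc_cmd calls traced above are the entire target bring-up for this test: bdev_set_options must run before framework init (the target was launched with --wait-for-rpc), framework_start_init completes the deferred startup, and nvmf_create_transport instantiates the TCP transport with the options carried in NVMF_TRANSPORT_OPTS. A minimal standalone sketch of the same sequence, assuming rpc_cmd is the harness wrapper that forwards to scripts/rpc.py against the default /var/tmp/spdk.sock:

    # Sketch only; flag values are copied verbatim from the trace above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_set_options -p 5 -c 1              # bdev I/O pool/cache sizing, set pre-init
    $RPC framework_start_init                    # finish startup deferred by --wait-for-rpc
    $RPC nvmf_create_transport -t tcp -o -u 8192 # '-t tcp -o' comes from NVMF_TRANSPORT_OPTS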
00:18:03.046 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:03.046 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.046 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:03.046 Malloc0 00:18:03.046 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.046 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:03.046 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.046 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:03.046 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.046 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:03.046 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.046 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:03.325 [2024-07-13 22:01:22.448812] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4058855 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4058858 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:03.325 { 00:18:03.325 "params": { 00:18:03.325 "name": "Nvme$subsystem", 00:18:03.325 "trtype": "$TEST_TRANSPORT", 00:18:03.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.325 "adrfam": "ipv4", 00:18:03.325 "trsvcid": "$NVMF_PORT", 00:18:03.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.325 "hdgst": ${hdgst:-false}, 00:18:03.325 "ddgst": ${ddgst:-false} 00:18:03.325 }, 00:18:03.325 "method": "bdev_nvme_attach_controller" 00:18:03.325 } 00:18:03.325 EOF 00:18:03.325 )") 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4058860 00:18:03.325 22:01:22 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4058864 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:03.325 { 00:18:03.325 "params": { 00:18:03.325 "name": "Nvme$subsystem", 00:18:03.325 "trtype": "$TEST_TRANSPORT", 00:18:03.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.325 "adrfam": "ipv4", 00:18:03.325 "trsvcid": "$NVMF_PORT", 00:18:03.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.325 "hdgst": ${hdgst:-false}, 00:18:03.325 "ddgst": ${ddgst:-false} 00:18:03.325 }, 00:18:03.325 "method": "bdev_nvme_attach_controller" 00:18:03.325 } 00:18:03.325 EOF 00:18:03.325 )") 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:03.325 { 00:18:03.325 "params": { 00:18:03.325 "name": "Nvme$subsystem", 00:18:03.325 "trtype": "$TEST_TRANSPORT", 00:18:03.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.325 "adrfam": "ipv4", 00:18:03.325 "trsvcid": "$NVMF_PORT", 00:18:03.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.325 "hdgst": ${hdgst:-false}, 00:18:03.325 "ddgst": ${ddgst:-false} 00:18:03.325 }, 00:18:03.325 "method": "bdev_nvme_attach_controller" 00:18:03.325 } 00:18:03.325 EOF 00:18:03.325 )") 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:03.325 { 00:18:03.325 "params": { 
00:18:03.325 "name": "Nvme$subsystem", 00:18:03.325 "trtype": "$TEST_TRANSPORT", 00:18:03.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.325 "adrfam": "ipv4", 00:18:03.325 "trsvcid": "$NVMF_PORT", 00:18:03.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.325 "hdgst": ${hdgst:-false}, 00:18:03.325 "ddgst": ${ddgst:-false} 00:18:03.325 }, 00:18:03.325 "method": "bdev_nvme_attach_controller" 00:18:03.325 } 00:18:03.325 EOF 00:18:03.325 )") 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4058855 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:03.325 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:03.326 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:03.326 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:03.326 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:18:03.326 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:03.326 "params": { 00:18:03.326 "name": "Nvme1", 00:18:03.326 "trtype": "tcp", 00:18:03.326 "traddr": "10.0.0.2", 00:18:03.326 "adrfam": "ipv4", 00:18:03.326 "trsvcid": "4420", 00:18:03.326 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.326 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.326 "hdgst": false, 00:18:03.326 "ddgst": false 00:18:03.326 }, 00:18:03.326 "method": "bdev_nvme_attach_controller" 00:18:03.326 }' 00:18:03.326 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:03.326 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:03.326 "params": { 00:18:03.326 "name": "Nvme1", 00:18:03.326 "trtype": "tcp", 00:18:03.326 "traddr": "10.0.0.2", 00:18:03.326 "adrfam": "ipv4", 00:18:03.326 "trsvcid": "4420", 00:18:03.326 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.326 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.326 "hdgst": false, 00:18:03.326 "ddgst": false 00:18:03.326 }, 00:18:03.326 "method": "bdev_nvme_attach_controller" 00:18:03.326 }' 00:18:03.326 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:03.326 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:03.326 "params": { 00:18:03.326 "name": "Nvme1", 00:18:03.326 "trtype": "tcp", 00:18:03.326 "traddr": "10.0.0.2", 00:18:03.326 "adrfam": "ipv4", 00:18:03.326 "trsvcid": "4420", 00:18:03.326 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.326 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.326 "hdgst": false, 00:18:03.326 "ddgst": false 00:18:03.326 }, 00:18:03.326 "method": "bdev_nvme_attach_controller" 00:18:03.326 }' 00:18:03.326 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:18:03.326 22:01:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:03.326 "params": { 00:18:03.326 "name": "Nvme1", 00:18:03.326 "trtype": "tcp", 00:18:03.326 "traddr": "10.0.0.2", 00:18:03.326 "adrfam": "ipv4", 00:18:03.326 "trsvcid": "4420", 00:18:03.326 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.326 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.326 "hdgst": false, 00:18:03.326 "ddgst": false 00:18:03.326 }, 00:18:03.326 "method": 
"bdev_nvme_attach_controller" 00:18:03.326 }' 00:18:03.326 [2024-07-13 22:01:22.536270] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:03.326 [2024-07-13 22:01:22.536364] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:03.326 [2024-07-13 22:01:22.536364] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:03.326 [2024-07-13 22:01:22.536364] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:03.326 [2024-07-13 22:01:22.536435] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:03.326 [2024-07-13 22:01:22.536511] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-13 22:01:22.536512] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-13 22:01:22.536514] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:03.326 --proc-type=auto ] 00:18:03.326 --proc-type=auto ] 00:18:03.326 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.591 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.591 [2024-07-13 22:01:22.781501] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.591 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.591 [2024-07-13 22:01:22.880943] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.591 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.848 [2024-07-13 22:01:22.987290] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.848 [2024-07-13 22:01:23.004378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:03.848 [2024-07-13 22:01:23.062668] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.848 [2024-07-13 22:01:23.107783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:03.848 [2024-07-13 22:01:23.212003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:04.105 [2024-07-13 22:01:23.276989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:18:04.366 Running I/O for 1 seconds... 00:18:04.366 Running I/O for 1 seconds... 00:18:04.366 Running I/O for 1 seconds... 00:18:04.626 Running I/O for 1 seconds... 
00:18:05.193 00:18:05.193 Latency(us) 00:18:05.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.193 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:05.193 Nvme1n1 : 1.00 153473.08 599.50 0.00 0.00 831.02 339.82 1146.88 00:18:05.193 =================================================================================================================== 00:18:05.193 Total : 153473.08 599.50 0.00 0.00 831.02 339.82 1146.88 00:18:05.455 00:18:05.455 Latency(us) 00:18:05.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.455 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:05.455 Nvme1n1 : 1.02 5304.79 20.72 0.00 0.00 23909.23 9466.31 38447.79 00:18:05.455 =================================================================================================================== 00:18:05.455 Total : 5304.79 20.72 0.00 0.00 23909.23 9466.31 38447.79 00:18:05.455 00:18:05.455 Latency(us) 00:18:05.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.455 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:05.455 Nvme1n1 : 1.01 5488.88 21.44 0.00 0.00 23155.26 3640.89 33593.27 00:18:05.455 =================================================================================================================== 00:18:05.455 Total : 5488.88 21.44 0.00 0.00 23155.26 3640.89 33593.27 00:18:05.455 00:18:05.455 Latency(us) 00:18:05.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.455 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:05.455 Nvme1n1 : 1.01 5405.07 21.11 0.00 0.00 23588.07 6747.78 49321.91 00:18:05.455 =================================================================================================================== 00:18:05.455 Total : 5405.07 21.11 0.00 0.00 23588.07 6747.78 49321.91 00:18:06.395 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 4058858 00:18:06.395 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4058860 00:18:06.395 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4058864 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:06.653 rmmod nvme_tcp 00:18:06.653 rmmod nvme_fabrics 00:18:06.653 rmmod nvme_keyring 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 4058665 ']' 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 4058665 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 4058665 ']' 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 4058665 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4058665 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4058665' 00:18:06.653 killing process with pid 4058665 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 4058665 00:18:06.653 22:01:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 4058665 00:18:08.027 22:01:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:08.027 22:01:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:08.027 22:01:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:08.027 22:01:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:08.027 22:01:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:08.027 22:01:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.027 22:01:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.027 22:01:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.927 22:01:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:09.927 00:18:09.927 real 0m10.299s 00:18:09.927 user 0m31.680s 00:18:09.927 sys 0m3.982s 00:18:09.927 22:01:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:09.927 22:01:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:09.927 ************************************ 00:18:09.927 END TEST nvmf_bdev_io_wait 00:18:09.927 ************************************ 00:18:09.927 22:01:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:09.927 22:01:29 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:09.927 22:01:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:09.927 22:01:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:09.927 22:01:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:09.927 ************************************ 00:18:09.927 START TEST nvmf_queue_depth 00:18:09.927 ************************************ 
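The queue_depth test that follows rebuilds the same single-namespace TCP target, then drives one bdevperf in attach-at-runtime mode: the app starts idle (-z) on its own RPC socket, the remote namespace is attached as NVMe0, and a 10-second verify run at queue depth 1024 is kicked off through the helper script, as the trace below shows. Condensed into a sketch (commands copied from the trace; only the backgrounding glue is assumed):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests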
00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:09.927 * Looking for test storage... 00:18:09.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:18:09.927 22:01:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:12.456 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:12.456 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:18:12.456 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:12.456 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:12.456 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:12.456 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:12.456 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:12.457 
22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:12.457 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:12.457 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:12.457 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:12.457 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:12.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:12.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:18:12.457 00:18:12.457 --- 10.0.0.2 ping statistics --- 00:18:12.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.457 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:12.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:12.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:18:12.457 00:18:12.457 --- 10.0.0.1 ping statistics --- 00:18:12.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.457 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=4061330 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 4061330 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 4061330 ']' 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:12.457 22:01:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:12.457 [2024-07-13 22:01:31.599241] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:18:12.457 [2024-07-13 22:01:31.599373] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:12.457 EAL: No free 2048 kB hugepages reported on node 1 00:18:12.457 [2024-07-13 22:01:31.737426] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.716 [2024-07-13 22:01:31.992824] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:12.716 [2024-07-13 22:01:31.992926] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:12.716 [2024-07-13 22:01:31.992956] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:12.716 [2024-07-13 22:01:31.992981] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:12.716 [2024-07-13 22:01:31.993002] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:12.716 [2024-07-13 22:01:31.993055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:13.280 [2024-07-13 22:01:32.534756] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:13.280 Malloc0 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.280 
22:01:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:13.280 [2024-07-13 22:01:32.657998] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4061516 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4061516 /var/tmp/bdevperf.sock 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 4061516 ']' 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:13.280 22:01:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:13.538 [2024-07-13 22:01:32.738410] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:18:13.538 [2024-07-13 22:01:32.738546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4061516 ] 00:18:13.538 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.538 [2024-07-13 22:01:32.867604] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.797 [2024-07-13 22:01:33.114450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.362 22:01:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:14.362 22:01:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:18:14.362 22:01:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:14.362 22:01:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.362 22:01:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:14.619 NVMe0n1 00:18:14.619 22:01:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.619 22:01:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:14.877 Running I/O for 10 seconds... 00:18:24.881 00:18:24.881 Latency(us) 00:18:24.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.881 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:24.881 Verification LBA range: start 0x0 length 0x4000 00:18:24.881 NVMe0n1 : 10.15 6141.23 23.99 0.00 0.00 165800.17 27379.48 103304.15 00:18:24.882 =================================================================================================================== 00:18:24.882 Total : 6141.23 23.99 0.00 0.00 165800.17 27379.48 103304.15 00:18:25.139 0 00:18:25.139 22:01:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 4061516 00:18:25.139 22:01:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 4061516 ']' 00:18:25.139 22:01:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 4061516 00:18:25.139 22:01:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:25.139 22:01:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:25.139 22:01:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4061516 00:18:25.139 22:01:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:25.139 22:01:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:25.139 22:01:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4061516' 00:18:25.139 killing process with pid 4061516 00:18:25.139 22:01:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 4061516 00:18:25.139 Received shutdown signal, test time was about 10.000000 seconds 00:18:25.140 00:18:25.140 Latency(us) 00:18:25.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.140 
=================================================================================================================== 00:18:25.140 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:25.140 22:01:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 4061516 00:18:26.072 22:01:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:26.072 22:01:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:26.072 22:01:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:26.072 22:01:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:26.072 22:01:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:26.072 22:01:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:26.072 22:01:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:26.072 22:01:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:26.072 rmmod nvme_tcp 00:18:26.072 rmmod nvme_fabrics 00:18:26.072 rmmod nvme_keyring 00:18:26.072 22:01:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:26.072 22:01:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:26.072 22:01:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:26.072 22:01:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 4061330 ']' 00:18:26.072 22:01:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 4061330 00:18:26.072 22:01:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 4061330 ']' 00:18:26.072 22:01:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 4061330 00:18:26.072 22:01:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:26.072 22:01:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:26.072 22:01:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4061330 00:18:26.072 22:01:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:26.072 22:01:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:26.072 22:01:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4061330' 00:18:26.072 killing process with pid 4061330 00:18:26.072 22:01:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 4061330 00:18:26.072 22:01:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 4061330 00:18:27.971 22:01:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:27.971 22:01:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:27.971 22:01:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:27.971 22:01:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:27.971 22:01:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:27.971 22:01:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.971 22:01:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:27.971 22:01:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:29.870 22:01:48 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:29.870 00:18:29.870 real 0m19.705s 00:18:29.870 user 0m28.117s 00:18:29.870 sys 0m3.367s 00:18:29.870 22:01:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:29.870 22:01:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:29.870 ************************************ 00:18:29.870 END TEST nvmf_queue_depth 00:18:29.870 ************************************ 00:18:29.870 22:01:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:29.870 22:01:48 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:29.870 22:01:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:29.870 22:01:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:29.870 22:01:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:29.870 ************************************ 00:18:29.870 START TEST nvmf_target_multipath 00:18:29.870 ************************************ 00:18:29.870 22:01:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:29.870 * Looking for test storage... 00:18:29.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:29.870 22:01:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:31.768 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:31.768 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:31.768 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:31.768 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:31.768 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:31.768 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:31.768 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:31.768 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:31.768 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:31.768 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:31.768 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:31.768 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:31.769 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:31.769 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:31.769 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:31.769 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:31.769 22:01:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:31.769 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:31.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:18:31.769 00:18:31.769 --- 10.0.0.2 ping statistics --- 00:18:31.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.769 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:18:31.769 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:31.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:31.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:18:31.769 00:18:31.769 --- 10.0.0.1 ping statistics --- 00:18:31.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.769 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:18:31.769 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.769 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:31.769 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:31.769 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.769 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:31.769 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:31.769 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.769 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:31.769 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:31.769 22:01:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:31.769 22:01:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:31.769 only one NIC for nvmf test 00:18:31.769 22:01:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:18:31.769 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:31.769 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:31.769 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:31.769 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:31.769 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:31.769 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:31.769 rmmod nvme_tcp 00:18:31.769 rmmod nvme_fabrics 00:18:31.769 rmmod nvme_keyring 00:18:31.769 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:31.770 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:31.770 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:31.770 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:31.770 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:31.770 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:31.770 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:31.770 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:31.770 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:31.770 22:01:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.770 22:01:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.770 22:01:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.300 22:01:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:18:34.300 22:01:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:34.300 22:01:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:34.300 22:01:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:34.300 22:01:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:34.300 22:01:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:34.300 22:01:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:34.300 22:01:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:34.300 22:01:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:34.300 22:01:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:34.300 22:01:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:34.300 22:01:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:34.300 22:01:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:34.300 22:01:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:34.300 22:01:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:34.300 22:01:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:34.300 22:01:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:34.300 22:01:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:34.300 22:01:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.300 22:01:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:34.300 22:01:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.300 22:01:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:34.300 00:18:34.300 real 0m4.150s 00:18:34.300 user 0m0.737s 00:18:34.300 sys 0m1.392s 00:18:34.300 22:01:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:34.300 22:01:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:34.300 ************************************ 00:18:34.300 END TEST nvmf_target_multipath 00:18:34.300 ************************************ 00:18:34.300 22:01:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:34.300 22:01:53 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:34.300 22:01:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:34.300 22:01:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:34.300 22:01:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:34.300 ************************************ 00:18:34.300 START TEST nvmf_zcopy 00:18:34.300 ************************************ 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:34.300 * Looking for test storage... 
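While the zcopy harness locates its test storage below, a note on the network fixture that both it and the multipath run above construct: nvmftestinit splits the two e810 ports between the host namespace (initiator side) and a private namespace holding the SPDK target. Condensed from the ip/iptables commands traced above (the real sequence also flushes stale addresses and brings up loopback inside the namespace; this is a sketch, not the full helper):

    ip netns add cvl_0_0_ns_spdk                                        # target gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # first port -> target netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, host netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # both-direction sanity check

This is why every target in these tests listens on 10.0.0.2 port 4420 while the initiators (bdevperf, nvme connect) originate from 10.0.0.1 over cvl_0_1.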
00:18:34.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:34.300 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.301 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.301 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.301 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:34.301 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:34.301 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:34.301 22:01:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:34.301 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:34.301 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:34.301 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:34.301 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:34.301 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:34.301 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.301 22:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:34.301 22:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.301 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:34.301 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:34.301 22:01:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:34.301 22:01:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:36.200 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:36.200 
22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:36.200 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:36.200 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:36.200 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:36.200 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:36.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:36.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:18:36.201 00:18:36.201 --- 10.0.0.2 ping statistics --- 00:18:36.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.201 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:36.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:36.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:18:36.201 00:18:36.201 --- 10.0.0.1 ping statistics --- 00:18:36.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.201 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=4066902 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 4066902 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 4066902 ']' 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:36.201 22:01:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:36.201 [2024-07-13 22:01:55.521072] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:36.201 [2024-07-13 22:01:55.521233] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.459 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.459 [2024-07-13 22:01:55.657993] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.932 [2024-07-13 22:01:55.910204] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.932 [2024-07-13 22:01:55.910290] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:36.932 [2024-07-13 22:01:55.910321] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.932 [2024-07-13 22:01:55.910347] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.932 [2024-07-13 22:01:55.910369] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:36.932 [2024-07-13 22:01:55.910420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:37.189 [2024-07-13 22:01:56.519852] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:37.189 [2024-07-13 22:01:56.536065] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.189 22:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:37.448 malloc0 00:18:37.448 22:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.448 
00:18:37.448 22:01:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:18:37.448 22:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:37.448 22:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:37.448 22:01:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:37.448 22:01:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:18:37.448 22:01:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:18:37.448 22:01:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:18:37.448 22:01:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:18:37.448 22:01:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:18:37.448 22:01:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:18:37.448 {
00:18:37.448 "params": {
00:18:37.448 "name": "Nvme$subsystem",
00:18:37.448 "trtype": "$TEST_TRANSPORT",
00:18:37.448 "traddr": "$NVMF_FIRST_TARGET_IP",
00:18:37.448 "adrfam": "ipv4",
00:18:37.448 "trsvcid": "$NVMF_PORT",
00:18:37.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:18:37.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:18:37.448 "hdgst": ${hdgst:-false},
00:18:37.448 "ddgst": ${ddgst:-false}
00:18:37.448 },
00:18:37.448 "method": "bdev_nvme_attach_controller"
00:18:37.448 }
00:18:37.448 EOF
00:18:37.448 )")
00:18:37.448 22:01:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:18:37.448 22:01:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:18:37.448 22:01:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:18:37.448 22:01:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:18:37.448 "params": {
00:18:37.448 "name": "Nvme1",
00:18:37.448 "trtype": "tcp",
00:18:37.448 "traddr": "10.0.0.2",
00:18:37.448 "adrfam": "ipv4",
00:18:37.448 "trsvcid": "4420",
00:18:37.448 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:37.448 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:37.448 "hdgst": false,
00:18:37.448 "ddgst": false
00:18:37.448 },
00:18:37.448 "method": "bdev_nvme_attach_controller"
00:18:37.448 }'
00:18:37.448 [2024-07-13 22:01:56.706605] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:18:37.448 [2024-07-13 22:01:56.706752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4067059 ]
00:18:37.448 EAL: No free 2048 kB hugepages reported on node 1
00:18:37.448 [2024-07-13 22:01:56.837433] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:37.706 [2024-07-13 22:01:57.086004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:18:38.272 Running I/O for 10 seconds...
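gen_nvmf_target_json expands the heredoc above into the resolved Nvme1 attach parameters, and bdevperf reads them through --json /dev/fd/62; no config file ever touches disk. The same pattern in isolation, as a sketch: bash process substitution supplies the /dev/fd path (which is exactly why the trace shows /dev/fd/62), the params object is the one printed above, and the outer "subsystems"/"bdev" wrapper is an assumption based on the standard SPDK JSON config layout, not something visible in this trace:

# Hedged sketch: hand bdevperf its controller config over an anonymous fd.
# <(...) expands to /dev/fd/NN; wrapper object assumed, params from the trace.
gen_json() {
cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
}
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json <(gen_json) -t 10 -q 128 -w verify -o 8192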
00:18:48.250
00:18:48.250 Latency(us)
00:18:48.250 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:48.250 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:18:48.250 Verification LBA range: start 0x0 length 0x1000
00:18:48.250 Nvme1n1 : 10.02 4332.47 33.85 0.00 0.00 29463.57 4660.34 38836.15
00:18:48.250 ===================================================================================================================
00:18:48.250 Total : 4332.47 33.85 0.00 0.00 29463.57 4660.34 38836.15
00:18:49.626 22:02:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=4068495
00:18:49.626 22:02:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:18:49.626 22:02:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:49.626 22:02:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:18:49.626 22:02:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:18:49.626 22:02:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:18:49.626 22:02:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:18:49.626 22:02:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:18:49.626 22:02:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:18:49.626 {
00:18:49.626 "params": {
00:18:49.626 "name": "Nvme$subsystem",
00:18:49.626 "trtype": "$TEST_TRANSPORT",
00:18:49.626 "traddr": "$NVMF_FIRST_TARGET_IP",
00:18:49.626 "adrfam": "ipv4",
00:18:49.626 "trsvcid": "$NVMF_PORT",
00:18:49.626 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:18:49.626 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:18:49.626 "hdgst": ${hdgst:-false},
00:18:49.626 "ddgst": ${ddgst:-false}
00:18:49.626 },
00:18:49.626 "method": "bdev_nvme_attach_controller"
00:18:49.626 }
00:18:49.626 EOF
00:18:49.626 )")
00:18:49.627 22:02:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:18:49.627 [2024-07-13 22:02:08.684702] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:49.627 [2024-07-13 22:02:08.684763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:49.627 22:02:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
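The 10-second verify pass above settles at 4332.47 IOPS, which checks out against the reported throughput: 4332.47 IOPS x 8192 B per I/O = 33.85 MiB/s. The second bdevperf run (perfpid 4068495) switches to a 5-second 50/50 random read/write workload (-w randrw -M 50), and the first "Requested NSID 1 already in use" pair appears: malloc0 was attached to cnode1 as NSID 1 during setup (target/zcopy.sh@30 above), so any further nvmf_subsystem_add_ns with -n 1 is expected to fail. A sketch of reproducing that error by hand, using the same RPC the harness issues:

# Hedged sketch: re-adding the NSID that cnode1 already exposes fails
# exactly like the error pairs in this trace.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
	nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# expected: non-zero exit; target logs the subsystem.c:2054 / nvmf_rpc.c:1546 pair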
00:18:49.627 22:02:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:18:49.627 22:02:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:18:49.627 "params": {
00:18:49.627 "name": "Nvme1",
00:18:49.627 "trtype": "tcp",
00:18:49.627 "traddr": "10.0.0.2",
00:18:49.627 "adrfam": "ipv4",
00:18:49.627 "trsvcid": "4420",
00:18:49.627 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:49.627 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:49.627 "hdgst": false,
00:18:49.627 "ddgst": false
00:18:49.627 },
00:18:49.627 "method": "bdev_nvme_attach_controller"
00:18:49.627 }'
00:18:49.627 [2024-07-13 22:02:08.692589] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:49.627 [2024-07-13 22:02:08.692624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line *ERROR* pair repeats at roughly 8 ms intervals through 22:02:08.756769 ...]
00:18:49.627 [2024-07-13 22:02:08.759215] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:18:49.627 [2024-07-13 22:02:08.759370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4068495 ]
[... *ERROR* pair repeats through 22:02:08.828979 ...]
00:18:49.627 EAL: No free 2048 kB hugepages reported on node 1
[... *ERROR* pair repeats through 22:02:08.885135 ...]
00:18:49.627 [2024-07-13 22:02:08.891802] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
[... *ERROR* pair repeats through 22:02:09.133934 ...]
00:18:49.887 [2024-07-13 22:02:09.141714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[... *ERROR* pair repeats through 22:02:09.583309 ...]
00:18:50.407 Running I/O for 5 seconds...
[... *ERROR* pair continues at roughly 15 ms intervals, 22:02:09.600863 onward ...]
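From "Running I/O for 5 seconds..." onward the failing add_ns attempts recur about every 15 ms, i.e. the harness keeps reissuing the RPC while the randrw workload is in flight; judging by the nvmf_rpc_ns_paused callback in the error, each rejected call still drives the subsystem through its pause/resume path under live zero-copy I/O (an interpretation from the cadence and function names, not wording from the harness). A hypothetical loop with the same shape:

# Hedged sketch (hypothetical helper, not the harness's literal loop):
# hammer the always-failing add_ns RPC for the duration of the I/O run.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
end=$((SECONDS + 5))
while (( SECONDS < end )); do
	"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done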
[... the "subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" / "nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace" pair keeps repeating at roughly 15 ms intervals for the rest of the 5-second run, from 22:02:09.765951 through 22:02:11.481 ...]
00:18:52.225 [2024-07-13 22:02:11.496658]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.225 [2024-07-13 22:02:11.496699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.225 [2024-07-13 22:02:11.512116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.225 [2024-07-13 22:02:11.512157] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.225 [2024-07-13 22:02:11.527538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.225 [2024-07-13 22:02:11.527579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.225 [2024-07-13 22:02:11.542043] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.225 [2024-07-13 22:02:11.542085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.225 [2024-07-13 22:02:11.557335] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.225 [2024-07-13 22:02:11.557375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.225 [2024-07-13 22:02:11.572279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.225 [2024-07-13 22:02:11.572320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.225 [2024-07-13 22:02:11.587247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.225 [2024-07-13 22:02:11.587288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.225 [2024-07-13 22:02:11.602649] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.225 [2024-07-13 22:02:11.602690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.225 [2024-07-13 22:02:11.618297] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.484 [2024-07-13 22:02:11.618338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.484 [2024-07-13 22:02:11.633833] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.484 [2024-07-13 22:02:11.633888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.484 [2024-07-13 22:02:11.648182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.484 [2024-07-13 22:02:11.648221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.484 [2024-07-13 22:02:11.663097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.484 [2024-07-13 22:02:11.663138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.484 [2024-07-13 22:02:11.678030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.484 [2024-07-13 22:02:11.678079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.484 [2024-07-13 22:02:11.692609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.484 [2024-07-13 22:02:11.692649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.484 [2024-07-13 22:02:11.708009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.484 [2024-07-13 22:02:11.708050] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.484 [2024-07-13 22:02:11.723302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.484 [2024-07-13 22:02:11.723342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.484 [2024-07-13 22:02:11.737389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.484 [2024-07-13 22:02:11.737430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.484 [2024-07-13 22:02:11.752167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.484 [2024-07-13 22:02:11.752207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.484 [2024-07-13 22:02:11.766905] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.484 [2024-07-13 22:02:11.766945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.484 [2024-07-13 22:02:11.781211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.484 [2024-07-13 22:02:11.781252] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.484 [2024-07-13 22:02:11.796036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.484 [2024-07-13 22:02:11.796076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.484 [2024-07-13 22:02:11.811619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.484 [2024-07-13 22:02:11.811659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.484 [2024-07-13 22:02:11.827012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.484 [2024-07-13 22:02:11.827053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.484 [2024-07-13 22:02:11.841857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.484 [2024-07-13 22:02:11.841911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.484 [2024-07-13 22:02:11.856078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.484 [2024-07-13 22:02:11.856119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.484 [2024-07-13 22:02:11.870969] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.484 [2024-07-13 22:02:11.871008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.743 [2024-07-13 22:02:11.886515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.743 [2024-07-13 22:02:11.886556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.743 [2024-07-13 22:02:11.901897] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.743 [2024-07-13 22:02:11.901937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.743 [2024-07-13 22:02:11.917236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.743 [2024-07-13 22:02:11.917277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.743 [2024-07-13 22:02:11.931919] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.743 [2024-07-13 22:02:11.931959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.743 [2024-07-13 22:02:11.947152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.743 [2024-07-13 22:02:11.947192] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.743 [2024-07-13 22:02:11.961888] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.743 [2024-07-13 22:02:11.961948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.743 [2024-07-13 22:02:11.977458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.743 [2024-07-13 22:02:11.977499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.743 [2024-07-13 22:02:11.993280] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.743 [2024-07-13 22:02:11.993320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.743 [2024-07-13 22:02:12.009169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.743 [2024-07-13 22:02:12.009210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.743 [2024-07-13 22:02:12.025040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.743 [2024-07-13 22:02:12.025082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.743 [2024-07-13 22:02:12.040434] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.743 [2024-07-13 22:02:12.040475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.743 [2024-07-13 22:02:12.055794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.743 [2024-07-13 22:02:12.055835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.743 [2024-07-13 22:02:12.070637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.743 [2024-07-13 22:02:12.070678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.743 [2024-07-13 22:02:12.085251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.743 [2024-07-13 22:02:12.085293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.743 [2024-07-13 22:02:12.100376] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.743 [2024-07-13 22:02:12.100417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.743 [2024-07-13 22:02:12.115536] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.743 [2024-07-13 22:02:12.115576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:52.743 [2024-07-13 22:02:12.130316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:52.744 [2024-07-13 22:02:12.130357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.003 [2024-07-13 22:02:12.145512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.003 [2024-07-13 22:02:12.145555] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.003 [2024-07-13 22:02:12.160240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.003 [2024-07-13 22:02:12.160282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.003 [2024-07-13 22:02:12.175569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.003 [2024-07-13 22:02:12.175610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.003 [2024-07-13 22:02:12.190697] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.003 [2024-07-13 22:02:12.190739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.003 [2024-07-13 22:02:12.206214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.003 [2024-07-13 22:02:12.206256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.003 [2024-07-13 22:02:12.221506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.003 [2024-07-13 22:02:12.221549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.003 [2024-07-13 22:02:12.237108] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.003 [2024-07-13 22:02:12.237149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.003 [2024-07-13 22:02:12.252215] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.003 [2024-07-13 22:02:12.252266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.003 [2024-07-13 22:02:12.267237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.003 [2024-07-13 22:02:12.267278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.003 [2024-07-13 22:02:12.282855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.003 [2024-07-13 22:02:12.282906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.003 [2024-07-13 22:02:12.298126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.003 [2024-07-13 22:02:12.298167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.003 [2024-07-13 22:02:12.313579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.003 [2024-07-13 22:02:12.313621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.003 [2024-07-13 22:02:12.329378] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.003 [2024-07-13 22:02:12.329425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.003 [2024-07-13 22:02:12.344308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.003 [2024-07-13 22:02:12.344349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.003 [2024-07-13 22:02:12.359343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.003 [2024-07-13 22:02:12.359385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.003 [2024-07-13 22:02:12.374670] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.003 [2024-07-13 22:02:12.374712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.003 [2024-07-13 22:02:12.390289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.003 [2024-07-13 22:02:12.390331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.266 [2024-07-13 22:02:12.406209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.266 [2024-07-13 22:02:12.406253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.266 [2024-07-13 22:02:12.421380] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.266 [2024-07-13 22:02:12.421421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.266 [2024-07-13 22:02:12.437236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.266 [2024-07-13 22:02:12.437278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.266 [2024-07-13 22:02:12.452650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.266 [2024-07-13 22:02:12.452692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.266 [2024-07-13 22:02:12.467834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.266 [2024-07-13 22:02:12.467884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.266 [2024-07-13 22:02:12.482385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.266 [2024-07-13 22:02:12.482426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.266 [2024-07-13 22:02:12.497180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.266 [2024-07-13 22:02:12.497221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.266 [2024-07-13 22:02:12.512808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.266 [2024-07-13 22:02:12.512851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.266 [2024-07-13 22:02:12.529232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.266 [2024-07-13 22:02:12.529276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.266 [2024-07-13 22:02:12.545593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.266 [2024-07-13 22:02:12.545643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.266 [2024-07-13 22:02:12.560682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.266 [2024-07-13 22:02:12.560724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.266 [2024-07-13 22:02:12.576040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.266 [2024-07-13 22:02:12.576082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.266 [2024-07-13 22:02:12.588056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.266 [2024-07-13 22:02:12.588098] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.266 [2024-07-13 22:02:12.603345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.266 [2024-07-13 22:02:12.603387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.266 [2024-07-13 22:02:12.618497] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.266 [2024-07-13 22:02:12.618548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.266 [2024-07-13 22:02:12.634213] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.266 [2024-07-13 22:02:12.634255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.266 [2024-07-13 22:02:12.649246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.266 [2024-07-13 22:02:12.649288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.525 [2024-07-13 22:02:12.665351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.525 [2024-07-13 22:02:12.665395] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.525 [2024-07-13 22:02:12.680804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.525 [2024-07-13 22:02:12.680847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.525 [2024-07-13 22:02:12.695894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.525 [2024-07-13 22:02:12.695941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.525 [2024-07-13 22:02:12.711394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.525 [2024-07-13 22:02:12.711435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.525 [2024-07-13 22:02:12.727073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.525 [2024-07-13 22:02:12.727115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.525 [2024-07-13 22:02:12.742023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.525 [2024-07-13 22:02:12.742064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.525 [2024-07-13 22:02:12.757266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.525 [2024-07-13 22:02:12.757307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.525 [2024-07-13 22:02:12.773117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.525 [2024-07-13 22:02:12.773158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.525 [2024-07-13 22:02:12.787327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.525 [2024-07-13 22:02:12.787367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.525 [2024-07-13 22:02:12.802749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.525 [2024-07-13 22:02:12.802790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.526 [2024-07-13 22:02:12.817821] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.526 [2024-07-13 22:02:12.817860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.526 [2024-07-13 22:02:12.832770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.526 [2024-07-13 22:02:12.832818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.526 [2024-07-13 22:02:12.847544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.526 [2024-07-13 22:02:12.847585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.526 [2024-07-13 22:02:12.861765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.526 [2024-07-13 22:02:12.861805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.526 [2024-07-13 22:02:12.876713] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.526 [2024-07-13 22:02:12.876754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.526 [2024-07-13 22:02:12.890673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.526 [2024-07-13 22:02:12.890714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.526 [2024-07-13 22:02:12.905588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.526 [2024-07-13 22:02:12.905628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.784 [2024-07-13 22:02:12.920845] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.784 [2024-07-13 22:02:12.920898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.784 [2024-07-13 22:02:12.936008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.784 [2024-07-13 22:02:12.936049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.784 [2024-07-13 22:02:12.950685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.784 [2024-07-13 22:02:12.950727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.784 [2024-07-13 22:02:12.965366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.785 [2024-07-13 22:02:12.965407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.785 [2024-07-13 22:02:12.980172] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.785 [2024-07-13 22:02:12.980213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.785 [2024-07-13 22:02:12.995402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.785 [2024-07-13 22:02:12.995443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.785 [2024-07-13 22:02:13.010862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.785 [2024-07-13 22:02:13.010915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.785 [2024-07-13 22:02:13.025353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.785 [2024-07-13 22:02:13.025394] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.785 [2024-07-13 22:02:13.040457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.785 [2024-07-13 22:02:13.040497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.785 [2024-07-13 22:02:13.055328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.785 [2024-07-13 22:02:13.055368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.785 [2024-07-13 22:02:13.070338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.785 [2024-07-13 22:02:13.070378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.785 [2024-07-13 22:02:13.085322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.785 [2024-07-13 22:02:13.085363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.785 [2024-07-13 22:02:13.100616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.785 [2024-07-13 22:02:13.100657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.785 [2024-07-13 22:02:13.115181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.785 [2024-07-13 22:02:13.115221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.785 [2024-07-13 22:02:13.130478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.785 [2024-07-13 22:02:13.130518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.785 [2024-07-13 22:02:13.144743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.785 [2024-07-13 22:02:13.144784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.785 [2024-07-13 22:02:13.159179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.785 [2024-07-13 22:02:13.159219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:53.785 [2024-07-13 22:02:13.173646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:53.785 [2024-07-13 22:02:13.173697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.054 [2024-07-13 22:02:13.189324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.054 [2024-07-13 22:02:13.189366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.054 [2024-07-13 22:02:13.203827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.054 [2024-07-13 22:02:13.203880] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.054 [2024-07-13 22:02:13.218039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.054 [2024-07-13 22:02:13.218079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.054 [2024-07-13 22:02:13.232222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.054 [2024-07-13 22:02:13.232263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.054 [2024-07-13 22:02:13.246826] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.054 [2024-07-13 22:02:13.246875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.054 [2024-07-13 22:02:13.261499] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.054 [2024-07-13 22:02:13.261539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.054 [2024-07-13 22:02:13.276425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.054 [2024-07-13 22:02:13.276466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.054 [2024-07-13 22:02:13.291134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.054 [2024-07-13 22:02:13.291174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.054 [2024-07-13 22:02:13.305542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.054 [2024-07-13 22:02:13.305583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.054 [2024-07-13 22:02:13.320648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.054 [2024-07-13 22:02:13.320688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.054 [2024-07-13 22:02:13.334561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.054 [2024-07-13 22:02:13.334601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.054 [2024-07-13 22:02:13.349397] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.054 [2024-07-13 22:02:13.349438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.054 [2024-07-13 22:02:13.364921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.054 [2024-07-13 22:02:13.364962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.054 [2024-07-13 22:02:13.379987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.054 [2024-07-13 22:02:13.380026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.054 [2024-07-13 22:02:13.395057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.054 [2024-07-13 22:02:13.395097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.054 [2024-07-13 22:02:13.410236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.054 [2024-07-13 22:02:13.410278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.054 [2024-07-13 22:02:13.425061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.054 [2024-07-13 22:02:13.425102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.054 [2024-07-13 22:02:13.440372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.054 [2024-07-13 22:02:13.440413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.312 [2024-07-13 22:02:13.455432] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.312 [2024-07-13 22:02:13.455474] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.312 [2024-07-13 22:02:13.470988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.312 [2024-07-13 22:02:13.471029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.312 [2024-07-13 22:02:13.486468] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.312 [2024-07-13 22:02:13.486508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.312 [2024-07-13 22:02:13.501397] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.312 [2024-07-13 22:02:13.501438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.312 [2024-07-13 22:02:13.516466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.312 [2024-07-13 22:02:13.516507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.312 [2024-07-13 22:02:13.531819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.312 [2024-07-13 22:02:13.531861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.312 [2024-07-13 22:02:13.546684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.312 [2024-07-13 22:02:13.546725] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.312 [2024-07-13 22:02:13.561664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.312 [2024-07-13 22:02:13.561705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.312 [2024-07-13 22:02:13.577226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.312 [2024-07-13 22:02:13.577266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.312 [2024-07-13 22:02:13.591877] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.312 [2024-07-13 22:02:13.591917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.312 [2024-07-13 22:02:13.605440] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.312 [2024-07-13 22:02:13.605481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.312 [2024-07-13 22:02:13.620398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.312 [2024-07-13 22:02:13.620439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.312 [2024-07-13 22:02:13.635651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.312 [2024-07-13 22:02:13.635690] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.312 [2024-07-13 22:02:13.650161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.312 [2024-07-13 22:02:13.650201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.312 [2024-07-13 22:02:13.665086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.312 [2024-07-13 22:02:13.665127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.312 [2024-07-13 22:02:13.680035] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.312 [2024-07-13 22:02:13.680076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.312 [2024-07-13 22:02:13.694461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.312 [2024-07-13 22:02:13.694502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.571 [2024-07-13 22:02:13.710367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.571 [2024-07-13 22:02:13.710409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.571 [2024-07-13 22:02:13.725191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.571 [2024-07-13 22:02:13.725231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.571 [2024-07-13 22:02:13.740070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.571 [2024-07-13 22:02:13.740110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.571 [2024-07-13 22:02:13.755420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.571 [2024-07-13 22:02:13.755461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.571 [2024-07-13 22:02:13.770411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.571 [2024-07-13 22:02:13.770453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.571 [2024-07-13 22:02:13.785494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.571 [2024-07-13 22:02:13.785535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.571 [2024-07-13 22:02:13.800451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.571 [2024-07-13 22:02:13.800492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.571 [2024-07-13 22:02:13.815346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.571 [2024-07-13 22:02:13.815386] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.571 [2024-07-13 22:02:13.830392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.571 [2024-07-13 22:02:13.830432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.571 [2024-07-13 22:02:13.844817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.571 [2024-07-13 22:02:13.844857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.571 [2024-07-13 22:02:13.859919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.571 [2024-07-13 22:02:13.859958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.571 [2024-07-13 22:02:13.874766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.571 [2024-07-13 22:02:13.874806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.571 [2024-07-13 22:02:13.889451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.571 [2024-07-13 22:02:13.889490] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.571 [2024-07-13 22:02:13.903926] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.571 [2024-07-13 22:02:13.903966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.571 [2024-07-13 22:02:13.918266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.571 [2024-07-13 22:02:13.918306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.571 [2024-07-13 22:02:13.933671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.571 [2024-07-13 22:02:13.933711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.571 [2024-07-13 22:02:13.948223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.571 [2024-07-13 22:02:13.948271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.571 [2024-07-13 22:02:13.963540] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.571 [2024-07-13 22:02:13.963581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.830 [2024-07-13 22:02:13.976583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.830 [2024-07-13 22:02:13.976623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.830 [2024-07-13 22:02:13.990674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.830 [2024-07-13 22:02:13.990715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.830 [2024-07-13 22:02:14.005353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.830 [2024-07-13 22:02:14.005393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.830 [2024-07-13 22:02:14.020101] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.830 [2024-07-13 22:02:14.020141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.830 [2024-07-13 22:02:14.035236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.830 [2024-07-13 22:02:14.035276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.830 [2024-07-13 22:02:14.049988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.830 [2024-07-13 22:02:14.050027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.830 [2024-07-13 22:02:14.063937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.830 [2024-07-13 22:02:14.063977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.830 [2024-07-13 22:02:14.079024] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.830 [2024-07-13 22:02:14.079065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.830 [2024-07-13 22:02:14.093819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.830 [2024-07-13 22:02:14.093859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.830 [2024-07-13 22:02:14.108619] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.830 [2024-07-13 22:02:14.108659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.830 [2024-07-13 22:02:14.122825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.830 [2024-07-13 22:02:14.122873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.830 [2024-07-13 22:02:14.137394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.830 [2024-07-13 22:02:14.137434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.830 [2024-07-13 22:02:14.152032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.830 [2024-07-13 22:02:14.152071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.830 [2024-07-13 22:02:14.167297] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.830 [2024-07-13 22:02:14.167337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.830 [2024-07-13 22:02:14.182048] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.831 [2024-07-13 22:02:14.182088] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.831 [2024-07-13 22:02:14.196500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.831 [2024-07-13 22:02:14.196540] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:54.831 [2024-07-13 22:02:14.211035] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:54.831 [2024-07-13 22:02:14.211075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.089 [2024-07-13 22:02:14.226759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.089 [2024-07-13 22:02:14.226807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.089 [2024-07-13 22:02:14.241769] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.089 [2024-07-13 22:02:14.241810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.089 [2024-07-13 22:02:14.256523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.090 [2024-07-13 22:02:14.256562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.090 [2024-07-13 22:02:14.271190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.090 [2024-07-13 22:02:14.271229] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.090 [2024-07-13 22:02:14.286086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.090 [2024-07-13 22:02:14.286125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.090 [2024-07-13 22:02:14.300496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.090 [2024-07-13 22:02:14.300535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.090 [2024-07-13 22:02:14.315464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.090 [2024-07-13 22:02:14.315504] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.090 [2024-07-13 22:02:14.331039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.090 [2024-07-13 22:02:14.331079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.090 [2024-07-13 22:02:14.345891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.090 [2024-07-13 22:02:14.345931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.090 [2024-07-13 22:02:14.360789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.090 [2024-07-13 22:02:14.360829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.090 [2024-07-13 22:02:14.375433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.090 [2024-07-13 22:02:14.375473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.090 [2024-07-13 22:02:14.391466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.090 [2024-07-13 22:02:14.391506] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.090 [2024-07-13 22:02:14.406382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.090 [2024-07-13 22:02:14.406422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.090 [2024-07-13 22:02:14.421571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.090 [2024-07-13 22:02:14.421611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.090 [2024-07-13 22:02:14.436247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.090 [2024-07-13 22:02:14.436287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.090 [2024-07-13 22:02:14.451471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.090 [2024-07-13 22:02:14.451510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.090 [2024-07-13 22:02:14.466330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.090 [2024-07-13 22:02:14.466370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.090 [2024-07-13 22:02:14.481584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.090 [2024-07-13 22:02:14.481626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.349 [2024-07-13 22:02:14.494712] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.349 [2024-07-13 22:02:14.494752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.349 [2024-07-13 22:02:14.509022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.349 [2024-07-13 22:02:14.509072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.349 [2024-07-13 22:02:14.523218] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.349 [2024-07-13 22:02:14.523259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.349 [2024-07-13 22:02:14.537599] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.349 [2024-07-13 22:02:14.537640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.349 [2024-07-13 22:02:14.551635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.349 [2024-07-13 22:02:14.551676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.349 [2024-07-13 22:02:14.566313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.349 [2024-07-13 22:02:14.566353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.349 [2024-07-13 22:02:14.581189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.349 [2024-07-13 22:02:14.581230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.349 [2024-07-13 22:02:14.596213] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.349 [2024-07-13 22:02:14.596253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.349 [2024-07-13 22:02:14.610137] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.349 [2024-07-13 22:02:14.610176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:55.349
00:18:55.349 Latency(us)
00:18:55.349 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:55.349 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:55.349 Nvme1n1 : 5.01 8471.81 66.19 0.00 0.00 15082.23 6019.60 24078.41
00:18:55.349 ===================================================================================================================
00:18:55.349 Total : 8471.81 66.19 0.00 0.00 15082.23 6019.60 24078.41
00:18:55.349 [2024-07-13 22:02:14.615364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.349 [2024-07-13 22:02:14.615401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.349 [2024-07-13 22:02:14.623297] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.349 [2024-07-13 22:02:14.623334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.349 [2024-07-13 22:02:14.635340] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.349 [2024-07-13 22:02:14.635384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.349 [2024-07-13 22:02:14.643352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.349 [2024-07-13 22:02:14.643389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.349 [2024-07-13 22:02:14.651363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.349 [2024-07-13 22:02:14.651399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.349 [2024-07-13 22:02:14.659398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.349 [2024-07-13 22:02:14.659436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.349 [2024-07-13 22:02:14.667526] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.349 [2024-07-13 22:02:14.667599]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.349 [2024-07-13 22:02:14.675582] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.349 [2024-07-13 22:02:14.675653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.349 [2024-07-13 22:02:14.683433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.349 [2024-07-13 22:02:14.683476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.349 [2024-07-13 22:02:14.691477] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.349 [2024-07-13 22:02:14.691512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.349 [2024-07-13 22:02:14.699493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.349 [2024-07-13 22:02:14.699528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.349 [2024-07-13 22:02:14.707500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.349 [2024-07-13 22:02:14.707536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.349 [2024-07-13 22:02:14.715535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.349 [2024-07-13 22:02:14.715569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.349 [2024-07-13 22:02:14.723556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.349 [2024-07-13 22:02:14.723591] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.349 [2024-07-13 22:02:14.731562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.349 [2024-07-13 22:02:14.731596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.349 [2024-07-13 22:02:14.739634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.349 [2024-07-13 22:02:14.739672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.747618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.609 [2024-07-13 22:02:14.747657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.755783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.609 [2024-07-13 22:02:14.755854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.763804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.609 [2024-07-13 22:02:14.763885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.771741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.609 [2024-07-13 22:02:14.771778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.779722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.609 [2024-07-13 22:02:14.779756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.787749] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.609 [2024-07-13 22:02:14.787782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.795739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.609 [2024-07-13 22:02:14.795773] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.803789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.609 [2024-07-13 22:02:14.803823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.811782] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.609 [2024-07-13 22:02:14.811817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.819820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.609 [2024-07-13 22:02:14.819855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.827841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.609 [2024-07-13 22:02:14.827884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.835883] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.609 [2024-07-13 22:02:14.835918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.843924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.609 [2024-07-13 22:02:14.843958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.851933] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.609 [2024-07-13 22:02:14.851968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.859929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.609 [2024-07-13 22:02:14.859963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.867988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.609 [2024-07-13 22:02:14.868022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.875970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.609 [2024-07-13 22:02:14.876004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.884023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.609 [2024-07-13 22:02:14.884057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.892038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.609 [2024-07-13 22:02:14.892072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.900045] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.609 [2024-07-13 22:02:14.900078] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.908086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.609 [2024-07-13 22:02:14.908125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.916252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.609 [2024-07-13 22:02:14.916317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.924165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.609 [2024-07-13 22:02:14.924213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.932179] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.609 [2024-07-13 22:02:14.932212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.940151] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.609 [2024-07-13 22:02:14.940184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.609 [2024-07-13 22:02:14.948198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.610 [2024-07-13 22:02:14.948231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.610 [2024-07-13 22:02:14.956247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.610 [2024-07-13 22:02:14.956282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.610 [2024-07-13 22:02:14.964224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.610 [2024-07-13 22:02:14.964258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.610 [2024-07-13 22:02:14.972263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.610 [2024-07-13 22:02:14.972298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.610 [2024-07-13 22:02:14.980454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.610 [2024-07-13 22:02:14.980525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.610 [2024-07-13 22:02:14.988436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.610 [2024-07-13 22:02:14.988508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.610 [2024-07-13 22:02:14.996505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.610 [2024-07-13 22:02:14.996574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.004452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.004519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.012408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.012446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.020415] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.020450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.028437] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.028471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.036456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.036490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.044469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.044503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.052475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.052509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.060528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.060563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.068524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.068558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.076560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.076594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.084585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.084619] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.092589] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.092623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.100636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.100670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.108649] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.108684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.116653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.116687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.124694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.124727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.132717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.132751] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.140745] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.140779] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.148765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.148799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.156767] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.156801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.164889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.164945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.172964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.173034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.180840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.180885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.188886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.188919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.196890] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.196923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.204943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.204976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.212966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.213000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.220987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.221021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.228997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.229031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.237021] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.237054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.245033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.245067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.253071] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.253105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:55.869 [2024-07-13 22:02:15.261078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:55.869 [2024-07-13 22:02:15.261117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.269117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.269155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.277128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.277165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.285146] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.285181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.293227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.293274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.301346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.301413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.309213] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.309247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.317270] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.317304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.325251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.325284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.333292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.333325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.341333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.341366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.349323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.349357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.357373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.357407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.365390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.365424] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.373386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.373418] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.381430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.381462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.389474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.389521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.397600] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.397668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.405496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.405531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.413529] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.413563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.421537] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.421570] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.429565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.429598] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.437562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.437602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.445611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.445645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.453604] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.453637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.461651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.461685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.469682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.469716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.477670] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.477703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.485711] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.485745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.493735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.493768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.501837] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.501907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.509954] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.510023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.129 [2024-07-13 22:02:15.517813] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.129 [2024-07-13 22:02:15.517850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.388 [2024-07-13 22:02:15.525845] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.388 [2024-07-13 22:02:15.525899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.388 [2024-07-13 22:02:15.533861] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.388 [2024-07-13 22:02:15.533917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.388 [2024-07-13 22:02:15.541857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.388 [2024-07-13 22:02:15.541901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.388 [2024-07-13 22:02:15.549929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.388 [2024-07-13 22:02:15.549964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.388 [2024-07-13 22:02:15.558101] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.388 [2024-07-13 22:02:15.558174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.388 [2024-07-13 22:02:15.565949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.388 [2024-07-13 22:02:15.565984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.388 [2024-07-13 22:02:15.573993] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.388 [2024-07-13 22:02:15.574034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.388 [2024-07-13 22:02:15.581992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.388 [2024-07-13 22:02:15.582025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.388 [2024-07-13 22:02:15.590040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.388 [2024-07-13 22:02:15.590081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.388 [2024-07-13 22:02:15.598039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.388 [2024-07-13 22:02:15.598072] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.388 [2024-07-13 22:02:15.606076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.388 [2024-07-13 22:02:15.606110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.388 [2024-07-13 22:02:15.614100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.388 [2024-07-13 22:02:15.614133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.388 [2024-07-13 22:02:15.622105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.388 [2024-07-13 22:02:15.622138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.388 [2024-07-13 22:02:15.630110] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.388 [2024-07-13 22:02:15.630143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.388 [2024-07-13 22:02:15.638150] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.388 [2024-07-13 22:02:15.638183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.388 [2024-07-13 22:02:15.646166] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.388 [2024-07-13 22:02:15.646200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.388 [2024-07-13 22:02:15.654200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.388 [2024-07-13 22:02:15.654235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.388 [2024-07-13 22:02:15.662212] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.388 [2024-07-13 22:02:15.662242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.388 [2024-07-13 22:02:15.670215] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.388 [2024-07-13 22:02:15.670243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.388 [2024-07-13 22:02:15.678255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.388 [2024-07-13 22:02:15.678283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.388 [2024-07-13 22:02:15.686303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.388 [2024-07-13 22:02:15.686338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.388 [2024-07-13 22:02:15.694302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.388 [2024-07-13 22:02:15.694331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.388 [2024-07-13 22:02:15.702338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.388 [2024-07-13 22:02:15.702366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.388 [2024-07-13 22:02:15.710334] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.388 [2024-07-13 22:02:15.710364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.388 
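
The paired errors above are the expected outcome here: while I/O is in flight, the zcopy test keeps calling the nvmf_subsystem_add_ns RPC for an NSID that is already attached, and the target rejects every attempt. A minimal sketch of triggering the same rejection by hand against a running target, assuming the rpc.py path and NQN used elsewhere in this test; the Malloc0 bdev name is an assumption, not taken from this trace:

    # Sketch only: assumes an nvmf target is running with subsystem
    # nqn.2016-06.io.spdk:cnode1 and a bdev named Malloc0 (assumed name).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0 -n 1    # first add of NSID 1 succeeds
    "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0 -n 1 \
        || echo 'rejected: Requested NSID 1 already in use'

00:18:56.388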
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4068495) - No such process 00:18:56.388 22:02:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 4068495 00:18:56.388 22:02:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:56.388 22:02:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.388 22:02:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:56.388 22:02:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.388 22:02:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:56.388 22:02:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.388 22:02:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:56.388 delay0 00:18:56.388 22:02:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.388 22:02:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:56.388 22:02:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.388 22:02:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:56.388 22:02:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.388 22:02:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:56.647 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.647 [2024-07-13 22:02:15.926030] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:03.245 Initializing NVMe Controllers 00:19:03.245 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:03.245 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:03.245 Initialization complete. Launching workers. 
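
For the abort run whose workers just launched, the trace above shows the setup it relies on: malloc0 is wrapped in a delay bdev with 1,000,000 us read/write latencies so that queued I/O stays in flight long enough to be aborted, the delay bdev is exposed as NSID 1, and the abort example is pointed at the TCP listener. Recapped as a hedged sketch (commands lifted from the trace above; not a standalone harness):

    # delay bdev: -r/-w are average and -t/-n are p99 latencies, in microseconds
    rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # abort example: core mask 0x1, 5 s run, queue depth 64, 50/50 randrw
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
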
00:19:03.245 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 101 00:19:03.245 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 388, failed to submit 33 00:19:03.245 success 171, unsuccess 217, failed 0 00:19:03.245 22:02:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:19:03.245 22:02:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:19:03.245 22:02:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:03.245 22:02:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:19:03.245 22:02:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:03.245 22:02:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:19:03.245 22:02:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:03.245 22:02:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:03.245 rmmod nvme_tcp 00:19:03.245 rmmod nvme_fabrics 00:19:03.245 rmmod nvme_keyring 00:19:03.245 22:02:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:03.245 22:02:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:19:03.245 22:02:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:19:03.245 22:02:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 4066902 ']' 00:19:03.245 22:02:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 4066902 00:19:03.245 22:02:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 4066902 ']' 00:19:03.245 22:02:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 4066902 00:19:03.245 22:02:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:19:03.245 22:02:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:03.245 22:02:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4066902 00:19:03.245 22:02:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:03.245 22:02:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:03.245 22:02:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4066902' 00:19:03.245 killing process with pid 4066902 00:19:03.245 22:02:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 4066902 00:19:03.245 22:02:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 4066902 00:19:04.620 22:02:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:04.620 22:02:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:04.620 22:02:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:04.620 22:02:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:04.620 22:02:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:04.620 22:02:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.620 22:02:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:04.620 22:02:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.524 22:02:25 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:06.524 00:19:06.524 real 0m32.529s 00:19:06.524 user 0m48.902s 00:19:06.524 sys 0m8.243s 00:19:06.524 22:02:25 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:19:06.524 22:02:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:06.524 ************************************ 00:19:06.524 END TEST nvmf_zcopy 00:19:06.524 ************************************ 00:19:06.524 22:02:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:06.524 22:02:25 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:06.524 22:02:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:06.524 22:02:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:06.524 22:02:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:06.524 ************************************ 00:19:06.524 START TEST nvmf_nmic 00:19:06.524 ************************************ 00:19:06.524 22:02:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:06.524 * Looking for test storage... 00:19:06.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:06.524 22:02:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:06.524 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:19:06.524 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:06.524 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.524 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.524 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.524 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.524 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.524 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.524 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.524 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.524 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.524 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.524 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.524 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.524 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:19:06.525 22:02:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:09.057 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:09.057 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:09.057 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.057 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:09.058 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:09.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:09.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:19:09.058 00:19:09.058 --- 10.0.0.2 ping statistics --- 00:19:09.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.058 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:19:09.058 22:02:27 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:09.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:09.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:19:09.058 00:19:09.058 --- 10.0.0.1 ping statistics --- 00:19:09.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.058 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:19:09.058 22:02:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:09.058 22:02:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:19:09.058 22:02:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:09.058 22:02:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:09.058 22:02:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:09.058 22:02:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:09.058 22:02:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:09.058 22:02:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:09.058 22:02:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:09.058 22:02:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:09.058 22:02:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:09.058 22:02:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:09.058 22:02:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.058 22:02:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=4072137 00:19:09.058 22:02:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:09.058 22:02:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 4072137 00:19:09.058 22:02:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 4072137 ']' 00:19:09.058 22:02:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.058 22:02:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:09.058 22:02:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.058 22:02:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:09.058 22:02:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.058 [2024-07-13 22:02:28.113791] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:09.058 [2024-07-13 22:02:28.113971] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.058 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.058 [2024-07-13 22:02:28.248240] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:09.317 [2024-07-13 22:02:28.515681] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:09.317 [2024-07-13 22:02:28.515764] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:09.317 [2024-07-13 22:02:28.515792] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:09.317 [2024-07-13 22:02:28.515814] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:09.317 [2024-07-13 22:02:28.515837] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:09.317 [2024-07-13 22:02:28.515960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.317 [2024-07-13 22:02:28.516018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.317 [2024-07-13 22:02:28.516068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.317 [2024-07-13 22:02:28.516079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.883 [2024-07-13 22:02:29.094263] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.883 Malloc0 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.883 [2024-07-13 22:02:29.201265] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:09.883 test case1: single bdev can't be used in multiple subsystems 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.883 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.883 [2024-07-13 22:02:29.225063] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:09.884 [2024-07-13 22:02:29.225109] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:09.884 [2024-07-13 22:02:29.225134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:09.884 request: 00:19:09.884 { 00:19:09.884 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:09.884 "namespace": { 00:19:09.884 "bdev_name": "Malloc0", 00:19:09.884 "no_auto_visible": false 00:19:09.884 }, 00:19:09.884 "method": "nvmf_subsystem_add_ns", 00:19:09.884 "req_id": 1 00:19:09.884 } 00:19:09.884 Got JSON-RPC error response 00:19:09.884 response: 00:19:09.884 { 00:19:09.884 "code": -32602, 00:19:09.884 "message": "Invalid parameters" 00:19:09.884 } 00:19:09.884 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:09.884 22:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:19:09.884 22:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:09.884 22:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:09.884 Adding namespace failed - expected result. 
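The nmic flow above is driven entirely through rpc_cmd (scripts/rpc.py). A minimal sketch of test case1 outside the harness, assuming a running nvmf_tgt reachable on the default /var/tmp/spdk.sock and rpc.py from the SPDK tree; bdev size, block size, NQNs and serials mirror the trace:

# create the shared bdev and the first subsystem, as in the trace above
rpc=./scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# a second subsystem cannot claim the same bdev: Malloc0 already holds an
# exclusive_write claim, so the RPC fails with -32602 as logged above
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'expected failure'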
00:19:09.884 22:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:09.884 test case2: host connect to nvmf target in multiple paths 00:19:09.884 22:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:09.884 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.884 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:09.884 [2024-07-13 22:02:29.233189] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:09.884 22:02:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.884 22:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:10.819 22:02:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:11.077 22:02:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:11.077 22:02:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:19:11.077 22:02:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:11.077 22:02:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:11.077 22:02:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:19:13.603 22:02:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:13.603 22:02:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:13.603 22:02:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:13.603 22:02:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:13.603 22:02:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:13.603 22:02:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:19:13.603 22:02:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:13.603 [global] 00:19:13.603 thread=1 00:19:13.603 invalidate=1 00:19:13.603 rw=write 00:19:13.603 time_based=1 00:19:13.603 runtime=1 00:19:13.603 ioengine=libaio 00:19:13.603 direct=1 00:19:13.603 bs=4096 00:19:13.603 iodepth=1 00:19:13.603 norandommap=0 00:19:13.603 numjobs=1 00:19:13.603 00:19:13.603 verify_dump=1 00:19:13.603 verify_backlog=512 00:19:13.603 verify_state_save=0 00:19:13.603 do_verify=1 00:19:13.603 verify=crc32c-intel 00:19:13.603 [job0] 00:19:13.603 filename=/dev/nvme0n1 00:19:13.603 Could not set queue depth (nvme0n1) 00:19:13.603 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:13.603 fio-3.35 00:19:13.603 Starting 1 thread 00:19:14.531 00:19:14.531 job0: (groupid=0, jobs=1): err= 0: pid=4072774: Sat Jul 13 22:02:33 2024 00:19:14.531 read: IOPS=19, BW=78.2KiB/s (80.1kB/s)(80.0KiB/1023msec) 00:19:14.531 slat (nsec): min=8352, max=35774, avg=24280.30, stdev=11170.81 
00:19:14.531 clat (usec): min=40876, max=41179, avg=40982.97, stdev=74.72 00:19:14.531 lat (usec): min=40910, max=41187, avg=41007.25, stdev=69.55 00:19:14.531 clat percentiles (usec): 00:19:14.531 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:14.531 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:14.531 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:14.531 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:14.531 | 99.99th=[41157] 00:19:14.531 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:19:14.531 slat (usec): min=7, max=40578, avg=146.61, stdev=2230.40 00:19:14.531 clat (usec): min=216, max=403, avg=245.70, stdev=20.51 00:19:14.531 lat (usec): min=225, max=40896, avg=392.32, stdev=2236.90 00:19:14.531 clat percentiles (usec): 00:19:14.531 | 1.00th=[ 221], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 235], 00:19:14.531 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:19:14.531 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 258], 95.00th=[ 265], 00:19:14.531 | 99.00th=[ 367], 99.50th=[ 392], 99.90th=[ 404], 99.95th=[ 404], 00:19:14.531 | 99.99th=[ 404] 00:19:14.531 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:14.531 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:14.531 lat (usec) : 250=74.25%, 500=21.99% 00:19:14.531 lat (msec) : 50=3.76% 00:19:14.531 cpu : usr=0.49%, sys=0.39%, ctx=536, majf=0, minf=2 00:19:14.531 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.531 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.531 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:14.531 00:19:14.531 Run status group 0 (all jobs): 00:19:14.531 READ: bw=78.2KiB/s (80.1kB/s), 78.2KiB/s-78.2KiB/s (80.1kB/s-80.1kB/s), io=80.0KiB (81.9kB), run=1023-1023msec 00:19:14.531 WRITE: bw=2002KiB/s (2050kB/s), 2002KiB/s-2002KiB/s (2050kB/s-2050kB/s), io=2048KiB (2097kB), run=1023-1023msec 00:19:14.531 00:19:14.531 Disk stats (read/write): 00:19:14.531 nvme0n1: ios=41/512, merge=0/0, ticks=1640/123, in_queue=1763, util=99.50% 00:19:14.531 22:02:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:14.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:14.788 22:02:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:14.788 22:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:19:14.788 22:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:14.788 22:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:14.788 22:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:14.788 22:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:14.788 22:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:19:14.788 22:02:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:14.788 22:02:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:19:14.788 22:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:19:14.788 22:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:19:14.788 22:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:14.788 22:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:19:14.788 22:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:14.788 22:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:14.788 rmmod nvme_tcp 00:19:15.045 rmmod nvme_fabrics 00:19:15.045 rmmod nvme_keyring 00:19:15.045 22:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:15.045 22:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:19:15.045 22:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:19:15.045 22:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 4072137 ']' 00:19:15.045 22:02:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 4072137 00:19:15.045 22:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 4072137 ']' 00:19:15.045 22:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 4072137 00:19:15.045 22:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:19:15.046 22:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:15.046 22:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4072137 00:19:15.046 22:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:15.046 22:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:15.046 22:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4072137' 00:19:15.046 killing process with pid 4072137 00:19:15.046 22:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 4072137 00:19:15.046 22:02:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 4072137 00:19:16.419 22:02:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:16.419 22:02:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:16.419 22:02:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:16.419 22:02:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:16.419 22:02:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:16.419 22:02:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.419 22:02:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:16.419 22:02:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.953 22:02:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:18.953 00:19:18.953 real 0m12.011s 00:19:18.953 user 0m28.195s 00:19:18.953 sys 0m2.505s 00:19:18.953 22:02:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:18.953 22:02:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:18.953 ************************************ 00:19:18.953 END TEST nvmf_nmic 00:19:18.953 ************************************ 00:19:18.953 22:02:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:18.953 22:02:37 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:18.953 22:02:37 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:18.953 22:02:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:18.953 22:02:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:18.953 ************************************ 00:19:18.953 START TEST nvmf_fio_target 00:19:18.953 ************************************ 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:18.953 * Looking for test storage... 00:19:18.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:18.953 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:18.954 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:18.954 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:18.954 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.954 22:02:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:18.954 22:02:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.954 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:18.954 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:18.954 22:02:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:18.954 22:02:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:20.894 22:02:39 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:20.894 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:20.894 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.894 22:02:39 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:20.894 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:20.894 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:20.894 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:20.895 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:20.895 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:20.895 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:20.895 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:19:20.895 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:20.895 22:02:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:20.895 22:02:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:20.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:20.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:19:20.895 00:19:20.895 --- 10.0.0.2 ping statistics --- 00:19:20.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.895 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:19:20.895 22:02:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:20.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:20.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:19:20.895 00:19:20.895 --- 10.0.0.1 ping statistics --- 00:19:20.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.895 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:19:20.895 22:02:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:20.895 22:02:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:19:20.895 22:02:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:20.895 22:02:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:20.895 22:02:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:20.895 22:02:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:20.895 22:02:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:20.895 22:02:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:20.895 22:02:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:20.895 22:02:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:20.895 22:02:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:20.895 22:02:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:20.895 22:02:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.895 22:02:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=4075080 00:19:20.895 22:02:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:20.895 22:02:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 4075080 00:19:20.895 22:02:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 4075080 ']' 00:19:20.895 22:02:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.895 22:02:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:20.895 22:02:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
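The nvmf_tcp_init sequence traced above condenses to the sketch below: the detected target-side port is moved into a private network namespace and both ends are addressed, so initiator and target can share one host. Interface names (cvl_0_0, cvl_0_1) and the 10.0.0.0/24 addresses are the ones this harness detected and chose; on other hardware they will differ:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP in
ping -c 1 10.0.0.2                                   # initiator -> target sanity check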
00:19:20.895 22:02:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:20.895 22:02:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.895 [2024-07-13 22:02:40.121767] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:20.895 [2024-07-13 22:02:40.121922] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.895 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.895 [2024-07-13 22:02:40.258369] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:21.153 [2024-07-13 22:02:40.526052] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:21.153 [2024-07-13 22:02:40.526133] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:21.153 [2024-07-13 22:02:40.526161] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:21.153 [2024-07-13 22:02:40.526182] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:21.153 [2024-07-13 22:02:40.526204] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:21.153 [2024-07-13 22:02:40.526325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.153 [2024-07-13 22:02:40.526383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.153 [2024-07-13 22:02:40.526429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.153 [2024-07-13 22:02:40.526440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:21.719 22:02:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:21.719 22:02:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:19:21.719 22:02:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:21.719 22:02:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:21.719 22:02:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.719 22:02:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.719 22:02:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:21.976 [2024-07-13 22:02:41.351968] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.234 22:02:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:22.492 22:02:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:22.492 22:02:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:22.749 22:02:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:22.749 22:02:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:23.007 22:02:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
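The bdev provisioning that fio.sh performs around this point condenses to the sketch below (ordering simplified relative to the trace); it assumes the transport was already created with 'nvmf_create_transport -t tcp -o -u 8192' and uses the same rpc.py. Seven 64 MiB malloc bdevs (512-byte blocks) are created: two stay plain, two back a raid0, three back a concat bdev, and all four resulting bdevs become namespaces of cnode1:

rpc=./scripts/rpc.py
for i in 0 1 2 3 4 5 6; do $rpc bdev_malloc_create 64 512; done   # Malloc0..Malloc6
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for b in Malloc0 Malloc1 raid0 concat0; do
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $b
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420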
00:19:23.007 22:02:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:23.265 22:02:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:23.265 22:02:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:23.522 22:02:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:23.780 22:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:23.780 22:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:24.344 22:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:24.344 22:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:24.601 22:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:24.601 22:02:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:24.859 22:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:25.116 22:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:25.116 22:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:25.373 22:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:25.373 22:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:25.631 22:02:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:25.631 [2024-07-13 22:02:45.016860] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.889 22:02:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:25.889 22:02:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:26.147 22:02:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:27.080 22:02:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:27.080 22:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:19:27.080 22:02:46 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:27.080 22:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:19:27.080 22:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:19:27.080 22:02:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:19:28.980 22:02:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:28.980 22:02:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:28.980 22:02:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:19:28.980 22:02:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:19:28.980 22:02:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:28.980 22:02:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:19:28.980 22:02:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:28.980 [global] 00:19:28.980 thread=1 00:19:28.980 invalidate=1 00:19:28.980 rw=write 00:19:28.980 time_based=1 00:19:28.980 runtime=1 00:19:28.980 ioengine=libaio 00:19:28.980 direct=1 00:19:28.980 bs=4096 00:19:28.980 iodepth=1 00:19:28.980 norandommap=0 00:19:28.980 numjobs=1 00:19:28.980 00:19:28.980 verify_dump=1 00:19:28.980 verify_backlog=512 00:19:28.980 verify_state_save=0 00:19:28.980 do_verify=1 00:19:28.980 verify=crc32c-intel 00:19:28.980 [job0] 00:19:28.980 filename=/dev/nvme0n1 00:19:28.980 [job1] 00:19:28.980 filename=/dev/nvme0n2 00:19:28.980 [job2] 00:19:28.980 filename=/dev/nvme0n3 00:19:28.980 [job3] 00:19:28.980 filename=/dev/nvme0n4 00:19:28.980 Could not set queue depth (nvme0n1) 00:19:28.980 Could not set queue depth (nvme0n2) 00:19:28.980 Could not set queue depth (nvme0n3) 00:19:28.980 Could not set queue depth (nvme0n4) 00:19:28.980 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:28.980 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:28.980 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:28.980 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:28.980 fio-3.35 00:19:28.980 Starting 4 threads 00:19:30.358 00:19:30.358 job0: (groupid=0, jobs=1): err= 0: pid=4076180: Sat Jul 13 22:02:49 2024 00:19:30.358 read: IOPS=512, BW=2049KiB/s (2098kB/s)(2088KiB/1019msec) 00:19:30.358 slat (nsec): min=5821, max=35440, avg=13303.09, stdev=6280.30 00:19:30.358 clat (usec): min=413, max=42210, avg=1285.94, stdev=5570.19 00:19:30.358 lat (usec): min=420, max=42226, avg=1299.24, stdev=5571.33 00:19:30.358 clat percentiles (usec): 00:19:30.358 | 1.00th=[ 433], 5.00th=[ 449], 10.00th=[ 465], 20.00th=[ 478], 00:19:30.358 | 30.00th=[ 486], 40.00th=[ 494], 50.00th=[ 502], 60.00th=[ 506], 00:19:30.358 | 70.00th=[ 515], 80.00th=[ 529], 90.00th=[ 570], 95.00th=[ 652], 00:19:30.358 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:30.358 | 99.99th=[42206] 00:19:30.358 write: IOPS=1004, BW=4020KiB/s (4116kB/s)(4096KiB/1019msec); 0 zone resets 00:19:30.358 slat (usec): min=7, max=23917, avg=39.92, stdev=746.94 00:19:30.358 clat (usec): 
min=227, max=551, avg=285.94, stdev=36.67 00:19:30.358 lat (usec): min=237, max=24342, avg=325.86, stdev=752.26 00:19:30.358 clat percentiles (usec): 00:19:30.359 | 1.00th=[ 237], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 260], 00:19:30.359 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 285], 00:19:30.359 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 322], 95.00th=[ 355], 00:19:30.359 | 99.00th=[ 441], 99.50th=[ 461], 99.90th=[ 482], 99.95th=[ 553], 00:19:30.359 | 99.99th=[ 553] 00:19:30.359 bw ( KiB/s): min= 2808, max= 5384, per=33.31%, avg=4096.00, stdev=1821.51, samples=2 00:19:30.359 iops : min= 702, max= 1346, avg=1024.00, stdev=455.38, samples=2 00:19:30.359 lat (usec) : 250=6.40%, 500=76.65%, 750=15.91%, 1000=0.39% 00:19:30.359 lat (msec) : 50=0.65% 00:19:30.359 cpu : usr=1.67%, sys=3.05%, ctx=1549, majf=0, minf=2 00:19:30.359 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:30.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.359 issued rwts: total=522,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.359 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:30.359 job1: (groupid=0, jobs=1): err= 0: pid=4076182: Sat Jul 13 22:02:49 2024 00:19:30.359 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:30.359 slat (nsec): min=6558, max=77018, avg=23614.97, stdev=11491.30 00:19:30.359 clat (usec): min=375, max=785, avg=520.67, stdev=81.88 00:19:30.359 lat (usec): min=388, max=806, avg=544.29, stdev=86.90 00:19:30.359 clat percentiles (usec): 00:19:30.359 | 1.00th=[ 396], 5.00th=[ 429], 10.00th=[ 437], 20.00th=[ 453], 00:19:30.359 | 30.00th=[ 465], 40.00th=[ 482], 50.00th=[ 498], 60.00th=[ 523], 00:19:30.359 | 70.00th=[ 553], 80.00th=[ 578], 90.00th=[ 652], 95.00th=[ 693], 00:19:30.359 | 99.00th=[ 750], 99.50th=[ 758], 99.90th=[ 775], 99.95th=[ 783], 00:19:30.359 | 99.99th=[ 783] 00:19:30.359 write: IOPS=1123, BW=4496KiB/s (4603kB/s)(4500KiB/1001msec); 0 zone resets 00:19:30.359 slat (nsec): min=6001, max=85371, avg=23143.81, stdev=14229.30 00:19:30.359 clat (usec): min=237, max=619, avg=358.20, stdev=72.72 00:19:30.359 lat (usec): min=243, max=660, avg=381.34, stdev=79.77 00:19:30.359 clat percentiles (usec): 00:19:30.359 | 1.00th=[ 245], 5.00th=[ 255], 10.00th=[ 265], 20.00th=[ 281], 00:19:30.359 | 30.00th=[ 310], 40.00th=[ 330], 50.00th=[ 363], 60.00th=[ 379], 00:19:30.359 | 70.00th=[ 396], 80.00th=[ 416], 90.00th=[ 457], 95.00th=[ 486], 00:19:30.359 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 578], 99.95th=[ 619], 00:19:30.359 | 99.99th=[ 619] 00:19:30.359 bw ( KiB/s): min= 4096, max= 4096, per=33.31%, avg=4096.00, stdev= 0.00, samples=1 00:19:30.359 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:30.359 lat (usec) : 250=1.21%, 500=73.38%, 750=25.03%, 1000=0.37% 00:19:30.359 cpu : usr=3.10%, sys=4.70%, ctx=2150, majf=0, minf=1 00:19:30.359 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:30.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.359 issued rwts: total=1024,1125,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.359 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:30.359 job2: (groupid=0, jobs=1): err= 0: pid=4076183: Sat Jul 13 22:02:49 2024 00:19:30.359 read: IOPS=19, BW=78.5KiB/s (80.4kB/s)(80.0KiB/1019msec) 
00:19:30.359 slat (nsec): min=7682, max=44525, avg=24247.60, stdev=11683.63 00:19:30.359 clat (usec): min=40896, max=41037, avg=40969.91, stdev=33.47 00:19:30.359 lat (usec): min=40930, max=41051, avg=40994.16, stdev=27.11 00:19:30.359 clat percentiles (usec): 00:19:30.359 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:30.359 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:30.359 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:30.359 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:30.359 | 99.99th=[41157] 00:19:30.359 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:19:30.359 slat (nsec): min=6500, max=78465, avg=25578.68, stdev=13042.79 00:19:30.359 clat (usec): min=252, max=672, avg=356.86, stdev=57.04 00:19:30.359 lat (usec): min=272, max=688, avg=382.44, stdev=58.31 00:19:30.359 clat percentiles (usec): 00:19:30.359 | 1.00th=[ 262], 5.00th=[ 273], 10.00th=[ 285], 20.00th=[ 302], 00:19:30.359 | 30.00th=[ 318], 40.00th=[ 330], 50.00th=[ 355], 60.00th=[ 379], 00:19:30.359 | 70.00th=[ 396], 80.00th=[ 408], 90.00th=[ 429], 95.00th=[ 445], 00:19:30.359 | 99.00th=[ 478], 99.50th=[ 498], 99.90th=[ 676], 99.95th=[ 676], 00:19:30.359 | 99.99th=[ 676] 00:19:30.359 bw ( KiB/s): min= 4096, max= 4096, per=33.31%, avg=4096.00, stdev= 0.00, samples=1 00:19:30.359 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:30.359 lat (usec) : 500=95.86%, 750=0.38% 00:19:30.359 lat (msec) : 50=3.76% 00:19:30.359 cpu : usr=0.39%, sys=1.47%, ctx=532, majf=0, minf=1 00:19:30.359 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:30.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.359 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.359 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:30.359 job3: (groupid=0, jobs=1): err= 0: pid=4076184: Sat Jul 13 22:02:49 2024 00:19:30.359 read: IOPS=445, BW=1783KiB/s (1826kB/s)(1840KiB/1032msec) 00:19:30.359 slat (nsec): min=6714, max=31384, avg=10258.88, stdev=3427.09 00:19:30.359 clat (usec): min=421, max=41041, avg=1828.78, stdev=7190.64 00:19:30.359 lat (usec): min=428, max=41068, avg=1839.04, stdev=7192.33 00:19:30.359 clat percentiles (usec): 00:19:30.359 | 1.00th=[ 429], 5.00th=[ 445], 10.00th=[ 449], 20.00th=[ 465], 00:19:30.359 | 30.00th=[ 474], 40.00th=[ 486], 50.00th=[ 498], 60.00th=[ 515], 00:19:30.359 | 70.00th=[ 537], 80.00th=[ 562], 90.00th=[ 594], 95.00th=[ 701], 00:19:30.359 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:30.359 | 99.99th=[41157] 00:19:30.359 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:19:30.359 slat (nsec): min=7946, max=66593, avg=13977.55, stdev=5950.95 00:19:30.359 clat (usec): min=253, max=601, avg=342.10, stdev=55.61 00:19:30.359 lat (usec): min=267, max=614, avg=356.07, stdev=57.04 00:19:30.359 clat percentiles (usec): 00:19:30.359 | 1.00th=[ 262], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 285], 00:19:30.359 | 30.00th=[ 306], 40.00th=[ 322], 50.00th=[ 334], 60.00th=[ 351], 00:19:30.359 | 70.00th=[ 371], 80.00th=[ 392], 90.00th=[ 420], 95.00th=[ 441], 00:19:30.359 | 99.00th=[ 478], 99.50th=[ 515], 99.90th=[ 603], 99.95th=[ 603], 00:19:30.359 | 99.99th=[ 603] 00:19:30.359 bw ( KiB/s): min= 4096, max= 4096, per=33.31%, avg=4096.00, stdev= 
0.00, samples=1 00:19:30.359 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:30.359 lat (usec) : 500=75.82%, 750=22.63% 00:19:30.359 lat (msec) : 50=1.54% 00:19:30.359 cpu : usr=0.68%, sys=1.45%, ctx=974, majf=0, minf=1 00:19:30.359 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:30.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.359 issued rwts: total=460,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.359 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:30.359 00:19:30.359 Run status group 0 (all jobs): 00:19:30.359 READ: bw=7853KiB/s (8041kB/s), 78.5KiB/s-4092KiB/s (80.4kB/s-4190kB/s), io=8104KiB (8298kB), run=1001-1032msec 00:19:30.359 WRITE: bw=12.0MiB/s (12.6MB/s), 1984KiB/s-4496KiB/s (2032kB/s-4603kB/s), io=12.4MiB (13.0MB), run=1001-1032msec 00:19:30.359 00:19:30.359 Disk stats (read/write): 00:19:30.359 nvme0n1: ios=542/1024, merge=0/0, ticks=1416/280, in_queue=1696, util=94.09% 00:19:30.359 nvme0n2: ios=832/1024, merge=0/0, ticks=1016/345, in_queue=1361, util=98.47% 00:19:30.359 nvme0n3: ios=72/512, merge=0/0, ticks=675/169, in_queue=844, util=91.84% 00:19:30.359 nvme0n4: ios=511/512, merge=0/0, ticks=705/161, in_queue=866, util=92.52% 00:19:30.359 22:02:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:30.359 [global] 00:19:30.359 thread=1 00:19:30.359 invalidate=1 00:19:30.359 rw=randwrite 00:19:30.359 time_based=1 00:19:30.359 runtime=1 00:19:30.359 ioengine=libaio 00:19:30.359 direct=1 00:19:30.359 bs=4096 00:19:30.359 iodepth=1 00:19:30.359 norandommap=0 00:19:30.359 numjobs=1 00:19:30.359 00:19:30.359 verify_dump=1 00:19:30.359 verify_backlog=512 00:19:30.359 verify_state_save=0 00:19:30.359 do_verify=1 00:19:30.359 verify=crc32c-intel 00:19:30.359 [job0] 00:19:30.359 filename=/dev/nvme0n1 00:19:30.359 [job1] 00:19:30.359 filename=/dev/nvme0n2 00:19:30.359 [job2] 00:19:30.359 filename=/dev/nvme0n3 00:19:30.359 [job3] 00:19:30.359 filename=/dev/nvme0n4 00:19:30.359 Could not set queue depth (nvme0n1) 00:19:30.359 Could not set queue depth (nvme0n2) 00:19:30.359 Could not set queue depth (nvme0n3) 00:19:30.359 Could not set queue depth (nvme0n4) 00:19:30.619 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:30.619 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:30.619 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:30.619 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:30.619 fio-3.35 00:19:30.619 Starting 4 threads 00:19:31.997 00:19:31.997 job0: (groupid=0, jobs=1): err= 0: pid=4076407: Sat Jul 13 22:02:51 2024 00:19:31.997 read: IOPS=775, BW=3101KiB/s (3175kB/s)(3104KiB/1001msec) 00:19:31.997 slat (nsec): min=8437, max=59706, avg=21713.05, stdev=8001.86 00:19:31.997 clat (usec): min=347, max=41461, avg=810.96, stdev=3390.39 00:19:31.997 lat (usec): min=359, max=41479, avg=832.67, stdev=3389.81 00:19:31.997 clat percentiles (usec): 00:19:31.997 | 1.00th=[ 363], 5.00th=[ 396], 10.00th=[ 416], 20.00th=[ 441], 00:19:31.997 | 30.00th=[ 482], 40.00th=[ 506], 50.00th=[ 529], 60.00th=[ 537], 00:19:31.997 | 70.00th=[ 
545], 80.00th=[ 562], 90.00th=[ 619], 95.00th=[ 652], 00:19:31.997 | 99.00th=[ 799], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:19:31.997 | 99.99th=[41681] 00:19:31.997 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:19:31.997 slat (nsec): min=6366, max=62402, avg=19206.06, stdev=7789.83 00:19:31.997 clat (usec): min=230, max=2234, avg=316.13, stdev=88.34 00:19:31.997 lat (usec): min=238, max=2254, avg=335.34, stdev=88.32 00:19:31.997 clat percentiles (usec): 00:19:31.997 | 1.00th=[ 237], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 262], 00:19:31.997 | 30.00th=[ 273], 40.00th=[ 289], 50.00th=[ 306], 60.00th=[ 322], 00:19:31.997 | 70.00th=[ 334], 80.00th=[ 359], 90.00th=[ 388], 95.00th=[ 416], 00:19:31.997 | 99.00th=[ 486], 99.50th=[ 519], 99.90th=[ 1319], 99.95th=[ 2245], 00:19:31.997 | 99.99th=[ 2245] 00:19:31.997 bw ( KiB/s): min= 4096, max= 4096, per=26.57%, avg=4096.00, stdev= 0.00, samples=1 00:19:31.997 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:31.997 lat (usec) : 250=5.00%, 500=68.17%, 750=26.22%, 1000=0.17% 00:19:31.997 lat (msec) : 2=0.06%, 4=0.06%, 50=0.33% 00:19:31.997 cpu : usr=2.20%, sys=4.50%, ctx=1800, majf=0, minf=2 00:19:31.997 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.997 issued rwts: total=776,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.997 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:31.997 job1: (groupid=0, jobs=1): err= 0: pid=4076408: Sat Jul 13 22:02:51 2024 00:19:31.997 read: IOPS=19, BW=79.2KiB/s (81.1kB/s)(80.0KiB/1010msec) 00:19:31.997 slat (nsec): min=13832, max=34989, avg=29343.30, stdev=8523.72 00:19:31.997 clat (usec): min=40799, max=41437, avg=40984.66, stdev=132.45 00:19:31.997 lat (usec): min=40833, max=41451, avg=41014.00, stdev=128.21 00:19:31.997 clat percentiles (usec): 00:19:31.997 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:19:31.997 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:31.997 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:31.997 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:31.997 | 99.99th=[41681] 00:19:31.997 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:19:31.997 slat (nsec): min=7850, max=71651, avg=15227.23, stdev=8115.41 00:19:31.997 clat (usec): min=243, max=2317, avg=342.17, stdev=134.04 00:19:31.997 lat (usec): min=252, max=2337, avg=357.39, stdev=135.80 00:19:31.997 clat percentiles (usec): 00:19:31.997 | 1.00th=[ 251], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 277], 00:19:31.997 | 30.00th=[ 289], 40.00th=[ 302], 50.00th=[ 318], 60.00th=[ 338], 00:19:31.997 | 70.00th=[ 355], 80.00th=[ 379], 90.00th=[ 424], 95.00th=[ 494], 00:19:31.997 | 99.00th=[ 611], 99.50th=[ 799], 99.90th=[ 2311], 99.95th=[ 2311], 00:19:31.997 | 99.99th=[ 2311] 00:19:31.997 bw ( KiB/s): min= 4096, max= 4096, per=26.57%, avg=4096.00, stdev= 0.00, samples=1 00:19:31.997 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:31.997 lat (usec) : 250=1.13%, 500=90.41%, 750=4.14%, 1000=0.19% 00:19:31.997 lat (msec) : 2=0.19%, 4=0.19%, 50=3.76% 00:19:31.997 cpu : usr=0.99%, sys=0.59%, ctx=534, majf=0, minf=1 00:19:31.997 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.997 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.997 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.997 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:31.997 job2: (groupid=0, jobs=1): err= 0: pid=4076409: Sat Jul 13 22:02:51 2024 00:19:31.997 read: IOPS=688, BW=2753KiB/s (2819kB/s)(2756KiB/1001msec) 00:19:31.997 slat (nsec): min=9432, max=52696, avg=22716.45, stdev=7977.44 00:19:31.997 clat (usec): min=387, max=41090, avg=879.90, stdev=3762.08 00:19:31.997 lat (usec): min=415, max=41101, avg=902.61, stdev=3761.15 00:19:31.997 clat percentiles (usec): 00:19:31.997 | 1.00th=[ 408], 5.00th=[ 433], 10.00th=[ 445], 20.00th=[ 465], 00:19:31.997 | 30.00th=[ 486], 40.00th=[ 515], 50.00th=[ 529], 60.00th=[ 537], 00:19:31.997 | 70.00th=[ 545], 80.00th=[ 562], 90.00th=[ 619], 95.00th=[ 652], 00:19:31.997 | 99.00th=[ 1549], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:31.997 | 99.99th=[41157] 00:19:31.997 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:19:31.997 slat (nsec): min=7352, max=81617, avg=20825.57, stdev=9903.96 00:19:31.997 clat (usec): min=244, max=2443, avg=335.27, stdev=81.84 00:19:31.997 lat (usec): min=267, max=2465, avg=356.09, stdev=83.06 00:19:31.997 clat percentiles (usec): 00:19:31.997 | 1.00th=[ 262], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 293], 00:19:31.997 | 30.00th=[ 302], 40.00th=[ 314], 50.00th=[ 326], 60.00th=[ 334], 00:19:31.997 | 70.00th=[ 347], 80.00th=[ 371], 90.00th=[ 400], 95.00th=[ 424], 00:19:31.997 | 99.00th=[ 482], 99.50th=[ 519], 99.90th=[ 611], 99.95th=[ 2442], 00:19:31.997 | 99.99th=[ 2442] 00:19:31.997 bw ( KiB/s): min= 4096, max= 4096, per=26.57%, avg=4096.00, stdev= 0.00, samples=1 00:19:31.997 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:31.997 lat (usec) : 250=0.06%, 500=73.44%, 750=25.39%, 1000=0.64% 00:19:31.997 lat (msec) : 2=0.06%, 4=0.06%, 50=0.35% 00:19:31.997 cpu : usr=1.80%, sys=5.20%, ctx=1714, majf=0, minf=1 00:19:31.997 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.997 issued rwts: total=689,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.997 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:31.997 job3: (groupid=0, jobs=1): err= 0: pid=4076410: Sat Jul 13 22:02:51 2024 00:19:31.997 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:31.997 slat (nsec): min=6174, max=68561, avg=17891.81, stdev=9089.20 00:19:31.997 clat (usec): min=345, max=748, avg=448.94, stdev=75.47 00:19:31.997 lat (usec): min=352, max=770, avg=466.83, stdev=79.02 00:19:31.997 clat percentiles (usec): 00:19:31.997 | 1.00th=[ 355], 5.00th=[ 363], 10.00th=[ 367], 20.00th=[ 383], 00:19:31.997 | 30.00th=[ 396], 40.00th=[ 408], 50.00th=[ 420], 60.00th=[ 461], 00:19:31.997 | 70.00th=[ 490], 80.00th=[ 515], 90.00th=[ 553], 95.00th=[ 594], 00:19:31.997 | 99.00th=[ 668], 99.50th=[ 685], 99.90th=[ 750], 99.95th=[ 750], 00:19:31.997 | 99.99th=[ 750] 00:19:31.997 write: IOPS=1331, BW=5327KiB/s (5455kB/s)(5332KiB/1001msec); 0 zone resets 00:19:31.997 slat (nsec): min=6585, max=63091, avg=23932.57, stdev=11151.97 00:19:31.997 clat (usec): min=252, max=1430, avg=357.02, stdev=70.76 00:19:31.997 lat (usec): min=260, max=1457, 
avg=380.95, stdev=74.49 00:19:31.997 clat percentiles (usec): 00:19:31.997 | 1.00th=[ 265], 5.00th=[ 281], 10.00th=[ 297], 20.00th=[ 310], 00:19:31.997 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 347], 60.00th=[ 363], 00:19:31.997 | 70.00th=[ 383], 80.00th=[ 400], 90.00th=[ 416], 95.00th=[ 441], 00:19:31.997 | 99.00th=[ 506], 99.50th=[ 594], 99.90th=[ 1369], 99.95th=[ 1434], 00:19:31.997 | 99.99th=[ 1434] 00:19:31.997 bw ( KiB/s): min= 5816, max= 5816, per=37.72%, avg=5816.00, stdev= 0.00, samples=1 00:19:31.997 iops : min= 1454, max= 1454, avg=1454.00, stdev= 0.00, samples=1 00:19:31.997 lat (usec) : 500=88.42%, 750=11.37%, 1000=0.08% 00:19:31.997 lat (msec) : 2=0.13% 00:19:31.997 cpu : usr=4.10%, sys=6.30%, ctx=2357, majf=0, minf=1 00:19:31.997 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.997 issued rwts: total=1024,1333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.997 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:31.997 00:19:31.997 Run status group 0 (all jobs): 00:19:31.997 READ: bw=9937KiB/s (10.2MB/s), 79.2KiB/s-4092KiB/s (81.1kB/s-4190kB/s), io=9.80MiB (10.3MB), run=1001-1010msec 00:19:31.997 WRITE: bw=15.1MiB/s (15.8MB/s), 2028KiB/s-5327KiB/s (2076kB/s-5455kB/s), io=15.2MiB (15.9MB), run=1001-1010msec 00:19:31.997 00:19:31.997 Disk stats (read/write): 00:19:31.997 nvme0n1: ios=562/973, merge=0/0, ticks=626/293, in_queue=919, util=91.68% 00:19:31.997 nvme0n2: ios=51/512, merge=0/0, ticks=1578/167, in_queue=1745, util=96.44% 00:19:31.997 nvme0n3: ios=570/857, merge=0/0, ticks=1145/279, in_queue=1424, util=93.33% 00:19:31.997 nvme0n4: ios=1039/1024, merge=0/0, ticks=522/346, in_queue=868, util=96.11% 00:19:31.997 22:02:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:31.997 [global] 00:19:31.997 thread=1 00:19:31.997 invalidate=1 00:19:31.997 rw=write 00:19:31.997 time_based=1 00:19:31.997 runtime=1 00:19:31.997 ioengine=libaio 00:19:31.997 direct=1 00:19:31.997 bs=4096 00:19:31.997 iodepth=128 00:19:31.997 norandommap=0 00:19:31.997 numjobs=1 00:19:31.997 00:19:31.997 verify_dump=1 00:19:31.997 verify_backlog=512 00:19:31.997 verify_state_save=0 00:19:31.997 do_verify=1 00:19:31.997 verify=crc32c-intel 00:19:31.997 [job0] 00:19:31.997 filename=/dev/nvme0n1 00:19:31.997 [job1] 00:19:31.997 filename=/dev/nvme0n2 00:19:31.997 [job2] 00:19:31.997 filename=/dev/nvme0n3 00:19:31.997 [job3] 00:19:31.997 filename=/dev/nvme0n4 00:19:31.997 Could not set queue depth (nvme0n1) 00:19:31.997 Could not set queue depth (nvme0n2) 00:19:31.997 Could not set queue depth (nvme0n3) 00:19:31.997 Could not set queue depth (nvme0n4) 00:19:31.997 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:31.997 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:31.997 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:31.997 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:31.997 fio-3.35 00:19:31.997 Starting 4 threads 00:19:33.368 00:19:33.368 job0: (groupid=0, jobs=1): err= 0: pid=4076688: Sat Jul 13 22:02:52 2024 00:19:33.368 
read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:19:33.368 slat (usec): min=3, max=41537, avg=252.06, stdev=1819.00 00:19:33.368 clat (msec): min=11, max=119, avg=32.99, stdev=26.96 00:19:33.368 lat (msec): min=11, max=119, avg=33.24, stdev=27.10 00:19:33.368 clat percentiles (msec): 00:19:33.369 | 1.00th=[ 13], 5.00th=[ 14], 10.00th=[ 14], 20.00th=[ 15], 00:19:33.369 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 22], 60.00th=[ 28], 00:19:33.369 | 70.00th=[ 32], 80.00th=[ 52], 90.00th=[ 73], 95.00th=[ 91], 00:19:33.369 | 99.00th=[ 120], 99.50th=[ 120], 99.90th=[ 120], 99.95th=[ 120], 00:19:33.369 | 99.99th=[ 120] 00:19:33.369 write: IOPS=2582, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1004msec); 0 zone resets 00:19:33.369 slat (usec): min=4, max=8703, avg=125.47, stdev=616.36 00:19:33.369 clat (usec): min=1828, max=29418, avg=16136.29, stdev=3451.98 00:19:33.369 lat (usec): min=7543, max=29434, avg=16261.76, stdev=3431.17 00:19:33.369 clat percentiles (usec): 00:19:33.369 | 1.00th=[ 7898], 5.00th=[11863], 10.00th=[12780], 20.00th=[13435], 00:19:33.369 | 30.00th=[13829], 40.00th=[14353], 50.00th=[15008], 60.00th=[16319], 00:19:33.369 | 70.00th=[19006], 80.00th=[19268], 90.00th=[19530], 95.00th=[20317], 00:19:33.369 | 99.00th=[29230], 99.50th=[29492], 99.90th=[29492], 99.95th=[29492], 00:19:33.369 | 99.99th=[29492] 00:19:33.369 bw ( KiB/s): min= 7080, max=13400, per=19.26%, avg=10240.00, stdev=4468.91, samples=2 00:19:33.369 iops : min= 1770, max= 3350, avg=2560.00, stdev=1117.23, samples=2 00:19:33.369 lat (msec) : 2=0.02%, 10=0.76%, 20=70.42%, 50=18.01%, 100=8.36% 00:19:33.369 lat (msec) : 250=2.43% 00:19:33.369 cpu : usr=3.09%, sys=5.68%, ctx=264, majf=0, minf=1 00:19:33.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:33.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:33.369 issued rwts: total=2560,2593,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:33.369 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:33.369 job1: (groupid=0, jobs=1): err= 0: pid=4076709: Sat Jul 13 22:02:52 2024 00:19:33.369 read: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec) 00:19:33.369 slat (usec): min=3, max=15216, avg=129.09, stdev=1007.18 00:19:33.369 clat (usec): min=5058, max=47242, avg=16686.96, stdev=4421.78 00:19:33.369 lat (usec): min=5064, max=47249, avg=16816.05, stdev=4523.81 00:19:33.369 clat percentiles (usec): 00:19:33.369 | 1.00th=[ 7701], 5.00th=[12518], 10.00th=[13435], 20.00th=[14222], 00:19:33.369 | 30.00th=[14746], 40.00th=[15401], 50.00th=[15795], 60.00th=[16712], 00:19:33.369 | 70.00th=[17433], 80.00th=[17695], 90.00th=[21627], 95.00th=[23200], 00:19:33.369 | 99.00th=[36439], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:19:33.369 | 99.99th=[47449] 00:19:33.369 write: IOPS=3436, BW=13.4MiB/s (14.1MB/s)(13.6MiB/1012msec); 0 zone resets 00:19:33.369 slat (usec): min=4, max=25032, avg=157.52, stdev=916.42 00:19:33.369 clat (usec): min=3597, max=46039, avg=21385.48, stdev=8533.19 00:19:33.369 lat (usec): min=3604, max=49283, avg=21543.00, stdev=8603.32 00:19:33.369 clat percentiles (usec): 00:19:33.369 | 1.00th=[ 7635], 5.00th=[ 9110], 10.00th=[10683], 20.00th=[14222], 00:19:33.369 | 30.00th=[15795], 40.00th=[16909], 50.00th=[19268], 60.00th=[23462], 00:19:33.369 | 70.00th=[26346], 80.00th=[29230], 90.00th=[34866], 95.00th=[38011], 00:19:33.369 | 99.00th=[38536], 99.50th=[40109], 99.90th=[44827], 99.95th=[44827], 
00:19:33.369 | 99.99th=[45876] 00:19:33.369 bw ( KiB/s): min=12600, max=14208, per=25.21%, avg=13404.00, stdev=1137.03, samples=2 00:19:33.369 iops : min= 3150, max= 3552, avg=3351.00, stdev=284.26, samples=2 00:19:33.369 lat (msec) : 4=0.09%, 10=5.47%, 20=62.73%, 50=31.71% 00:19:33.369 cpu : usr=2.87%, sys=4.45%, ctx=325, majf=0, minf=1 00:19:33.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:19:33.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:33.369 issued rwts: total=3072,3478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:33.369 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:33.369 job2: (groupid=0, jobs=1): err= 0: pid=4076743: Sat Jul 13 22:02:52 2024 00:19:33.369 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:19:33.369 slat (usec): min=3, max=15683, avg=148.26, stdev=847.89 00:19:33.369 clat (usec): min=10478, max=56256, avg=19613.74, stdev=9287.93 00:19:33.369 lat (usec): min=10485, max=56278, avg=19762.00, stdev=9368.82 00:19:33.369 clat percentiles (usec): 00:19:33.369 | 1.00th=[11600], 5.00th=[13042], 10.00th=[13829], 20.00th=[14484], 00:19:33.369 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15664], 60.00th=[16581], 00:19:33.369 | 70.00th=[17695], 80.00th=[21627], 90.00th=[37487], 95.00th=[43779], 00:19:33.369 | 99.00th=[49546], 99.50th=[52167], 99.90th=[53740], 99.95th=[55313], 00:19:33.369 | 99.99th=[56361] 00:19:33.369 write: IOPS=3274, BW=12.8MiB/s (13.4MB/s)(12.8MiB/1003msec); 0 zone resets 00:19:33.369 slat (usec): min=3, max=18140, avg=155.03, stdev=855.83 00:19:33.369 clat (usec): min=318, max=60159, avg=19512.67, stdev=9565.69 00:19:33.369 lat (usec): min=4394, max=60172, avg=19667.71, stdev=9641.73 00:19:33.369 clat percentiles (usec): 00:19:33.369 | 1.00th=[ 4817], 5.00th=[11600], 10.00th=[13435], 20.00th=[14353], 00:19:33.369 | 30.00th=[14615], 40.00th=[14877], 50.00th=[15139], 60.00th=[15926], 00:19:33.369 | 70.00th=[18744], 80.00th=[25035], 90.00th=[35914], 95.00th=[45351], 00:19:33.369 | 99.00th=[48497], 99.50th=[50594], 99.90th=[52167], 99.95th=[52167], 00:19:33.369 | 99.99th=[60031] 00:19:33.369 bw ( KiB/s): min=10024, max=15224, per=23.74%, avg=12624.00, stdev=3676.96, samples=2 00:19:33.369 iops : min= 2506, max= 3806, avg=3156.00, stdev=919.24, samples=2 00:19:33.369 lat (usec) : 500=0.02% 00:19:33.369 lat (msec) : 10=1.43%, 20=74.24%, 50=23.51%, 100=0.80% 00:19:33.369 cpu : usr=3.49%, sys=8.18%, ctx=349, majf=0, minf=1 00:19:33.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:33.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:33.369 issued rwts: total=3072,3284,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:33.369 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:33.369 job3: (groupid=0, jobs=1): err= 0: pid=4076755: Sat Jul 13 22:02:52 2024 00:19:33.369 read: IOPS=4008, BW=15.7MiB/s (16.4MB/s)(15.7MiB/1004msec) 00:19:33.369 slat (usec): min=2, max=12113, avg=125.47, stdev=694.37 00:19:33.369 clat (usec): min=735, max=32266, avg=16078.08, stdev=2992.99 00:19:33.369 lat (usec): min=4252, max=32277, avg=16203.55, stdev=3007.40 00:19:33.369 clat percentiles (usec): 00:19:33.369 | 1.00th=[ 8848], 5.00th=[12518], 10.00th=[13173], 20.00th=[14615], 00:19:33.369 | 30.00th=[15270], 40.00th=[15533], 50.00th=[15795], 
60.00th=[16057], 00:19:33.369 | 70.00th=[16909], 80.00th=[17433], 90.00th=[18220], 95.00th=[20055], 00:19:33.369 | 99.00th=[27657], 99.50th=[30016], 99.90th=[30016], 99.95th=[30016], 00:19:33.369 | 99.99th=[32375] 00:19:33.369 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:19:33.369 slat (usec): min=3, max=20199, avg=114.49, stdev=609.19 00:19:33.369 clat (usec): min=5637, max=28175, avg=15175.23, stdev=2994.45 00:19:33.369 lat (usec): min=5641, max=40284, avg=15289.72, stdev=2993.10 00:19:33.369 clat percentiles (usec): 00:19:33.369 | 1.00th=[ 9372], 5.00th=[11076], 10.00th=[11600], 20.00th=[12911], 00:19:33.369 | 30.00th=[13960], 40.00th=[14877], 50.00th=[15008], 60.00th=[15664], 00:19:33.369 | 70.00th=[16188], 80.00th=[16909], 90.00th=[17695], 95.00th=[19268], 00:19:33.369 | 99.00th=[26870], 99.50th=[28181], 99.90th=[28181], 99.95th=[28181], 00:19:33.369 | 99.99th=[28181] 00:19:33.369 bw ( KiB/s): min=16384, max=16384, per=30.82%, avg=16384.00, stdev= 0.00, samples=2 00:19:33.369 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:19:33.369 lat (usec) : 750=0.01% 00:19:33.369 lat (msec) : 10=1.67%, 20=93.73%, 50=4.58% 00:19:33.369 cpu : usr=3.39%, sys=4.89%, ctx=474, majf=0, minf=1 00:19:33.369 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:33.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:33.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:33.369 issued rwts: total=4025,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:33.369 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:33.369 00:19:33.369 Run status group 0 (all jobs): 00:19:33.369 READ: bw=49.1MiB/s (51.5MB/s), 9.96MiB/s-15.7MiB/s (10.4MB/s-16.4MB/s), io=49.7MiB (52.1MB), run=1003-1012msec 00:19:33.369 WRITE: bw=51.9MiB/s (54.4MB/s), 10.1MiB/s-15.9MiB/s (10.6MB/s-16.7MB/s), io=52.5MiB (55.1MB), run=1003-1012msec 00:19:33.369 00:19:33.369 Disk stats (read/write): 00:19:33.369 nvme0n1: ios=2260/2560, merge=0/0, ticks=16318/9581, in_queue=25899, util=97.60% 00:19:33.369 nvme0n2: ios=2576/2671, merge=0/0, ticks=34887/46881, in_queue=81768, util=94.31% 00:19:33.369 nvme0n3: ios=2486/2560, merge=0/0, ticks=16728/17724, in_queue=34452, util=96.34% 00:19:33.369 nvme0n4: ios=3280/3584, merge=0/0, ticks=18003/17880, in_queue=35883, util=89.53% 00:19:33.369 22:02:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:33.369 [global] 00:19:33.369 thread=1 00:19:33.369 invalidate=1 00:19:33.369 rw=randwrite 00:19:33.369 time_based=1 00:19:33.369 runtime=1 00:19:33.369 ioengine=libaio 00:19:33.369 direct=1 00:19:33.369 bs=4096 00:19:33.369 iodepth=128 00:19:33.369 norandommap=0 00:19:33.369 numjobs=1 00:19:33.369 00:19:33.369 verify_dump=1 00:19:33.369 verify_backlog=512 00:19:33.369 verify_state_save=0 00:19:33.369 do_verify=1 00:19:33.369 verify=crc32c-intel 00:19:33.369 [job0] 00:19:33.369 filename=/dev/nvme0n1 00:19:33.369 [job1] 00:19:33.369 filename=/dev/nvme0n2 00:19:33.369 [job2] 00:19:33.369 filename=/dev/nvme0n3 00:19:33.369 [job3] 00:19:33.369 filename=/dev/nvme0n4 00:19:33.369 Could not set queue depth (nvme0n1) 00:19:33.369 Could not set queue depth (nvme0n2) 00:19:33.369 Could not set queue depth (nvme0n3) 00:19:33.369 Could not set queue depth (nvme0n4) 00:19:33.369 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:19:33.369 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:33.369 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:33.369 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:33.369 fio-3.35 00:19:33.369 Starting 4 threads 00:19:34.743 00:19:34.743 job0: (groupid=0, jobs=1): err= 0: pid=4076988: Sat Jul 13 22:02:53 2024 00:19:34.743 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:19:34.743 slat (usec): min=2, max=17968, avg=112.98, stdev=722.15 00:19:34.743 clat (usec): min=4919, max=45904, avg=14646.08, stdev=4769.57 00:19:34.743 lat (usec): min=4922, max=45910, avg=14759.06, stdev=4804.83 00:19:34.743 clat percentiles (usec): 00:19:34.743 | 1.00th=[ 5932], 5.00th=[10290], 10.00th=[11076], 20.00th=[12387], 00:19:34.743 | 30.00th=[12649], 40.00th=[13173], 50.00th=[13698], 60.00th=[14091], 00:19:34.743 | 70.00th=[14877], 80.00th=[15926], 90.00th=[18482], 95.00th=[20841], 00:19:34.743 | 99.00th=[41157], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:19:34.743 | 99.99th=[45876] 00:19:34.743 write: IOPS=4525, BW=17.7MiB/s (18.5MB/s)(17.9MiB/1011msec); 0 zone resets 00:19:34.743 slat (usec): min=3, max=16818, avg=105.78, stdev=678.00 00:19:34.743 clat (usec): min=4025, max=57418, avg=14568.73, stdev=6411.03 00:19:34.743 lat (usec): min=4033, max=57424, avg=14674.51, stdev=6437.26 00:19:34.743 clat percentiles (usec): 00:19:34.743 | 1.00th=[ 4883], 5.00th=[ 7439], 10.00th=[10028], 20.00th=[11731], 00:19:34.743 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13435], 60.00th=[13829], 00:19:34.743 | 70.00th=[14615], 80.00th=[15533], 90.00th=[17171], 95.00th=[26870], 00:19:34.743 | 99.00th=[49021], 99.50th=[54264], 99.90th=[54789], 99.95th=[54789], 00:19:34.743 | 99.99th=[57410] 00:19:34.743 bw ( KiB/s): min=17248, max=18328, per=33.95%, avg=17788.00, stdev=763.68, samples=2 00:19:34.743 iops : min= 4312, max= 4582, avg=4447.00, stdev=190.92, samples=2 00:19:34.743 lat (msec) : 10=6.90%, 20=86.16%, 50=6.56%, 100=0.38% 00:19:34.743 cpu : usr=3.56%, sys=6.24%, ctx=350, majf=0, minf=17 00:19:34.743 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:34.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.743 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:34.743 issued rwts: total=4096,4575,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.743 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.743 job1: (groupid=0, jobs=1): err= 0: pid=4076993: Sat Jul 13 22:02:53 2024 00:19:34.743 read: IOPS=2364, BW=9459KiB/s (9686kB/s)(9960KiB/1053msec) 00:19:34.743 slat (usec): min=2, max=31104, avg=191.41, stdev=1363.63 00:19:34.743 clat (msec): min=6, max=117, avg=25.76, stdev=16.22 00:19:34.743 lat (msec): min=7, max=117, avg=25.95, stdev=16.30 00:19:34.743 clat percentiles (msec): 00:19:34.743 | 1.00th=[ 11], 5.00th=[ 14], 10.00th=[ 16], 20.00th=[ 18], 00:19:34.743 | 30.00th=[ 20], 40.00th=[ 21], 50.00th=[ 23], 60.00th=[ 24], 00:19:34.743 | 70.00th=[ 24], 80.00th=[ 27], 90.00th=[ 36], 95.00th=[ 69], 00:19:34.743 | 99.00th=[ 102], 99.50th=[ 102], 99.90th=[ 118], 99.95th=[ 118], 00:19:34.743 | 99.99th=[ 118] 00:19:34.743 write: IOPS=2431, BW=9725KiB/s (9958kB/s)(10.0MiB/1053msec); 0 zone resets 00:19:34.743 slat (usec): min=3, max=49299, avg=196.61, stdev=1394.67 
00:19:34.743 clat (usec): min=6665, max=76338, avg=26984.67, stdev=15045.63 00:19:34.743 lat (usec): min=6677, max=76350, avg=27181.28, stdev=15130.82 00:19:34.743 clat percentiles (usec): 00:19:34.744 | 1.00th=[ 7111], 5.00th=[ 9634], 10.00th=[12387], 20.00th=[13829], 00:19:34.744 | 30.00th=[16909], 40.00th=[20841], 50.00th=[22152], 60.00th=[25822], 00:19:34.744 | 70.00th=[32900], 80.00th=[40633], 90.00th=[47449], 95.00th=[52167], 00:19:34.744 | 99.00th=[72877], 99.50th=[74974], 99.90th=[76022], 99.95th=[76022], 00:19:34.744 | 99.99th=[76022] 00:19:34.744 bw ( KiB/s): min= 9928, max=10552, per=19.55%, avg=10240.00, stdev=441.23, samples=2 00:19:34.744 iops : min= 2482, max= 2638, avg=2560.00, stdev=110.31, samples=2 00:19:34.744 lat (msec) : 10=3.70%, 20=33.49%, 50=56.48%, 100=4.97%, 250=1.37% 00:19:34.744 cpu : usr=1.43%, sys=2.85%, ctx=225, majf=0, minf=7 00:19:34.744 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:34.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:34.744 issued rwts: total=2490,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.744 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.744 job2: (groupid=0, jobs=1): err= 0: pid=4076994: Sat Jul 13 22:02:53 2024 00:19:34.744 read: IOPS=2071, BW=8287KiB/s (8485kB/s)(8328KiB/1005msec) 00:19:34.744 slat (usec): min=3, max=27028, avg=275.95, stdev=1589.92 00:19:34.744 clat (usec): min=4167, max=80430, avg=35253.98, stdev=14681.27 00:19:34.744 lat (usec): min=5015, max=80474, avg=35529.93, stdev=14825.11 00:19:34.744 clat percentiles (usec): 00:19:34.744 | 1.00th=[ 7046], 5.00th=[18744], 10.00th=[21890], 20.00th=[23725], 00:19:34.744 | 30.00th=[25560], 40.00th=[27657], 50.00th=[30278], 60.00th=[34341], 00:19:34.744 | 70.00th=[38536], 80.00th=[44827], 90.00th=[63701], 95.00th=[67634], 00:19:34.744 | 99.00th=[70779], 99.50th=[70779], 99.90th=[78119], 99.95th=[79168], 00:19:34.744 | 99.99th=[80217] 00:19:34.744 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:19:34.744 slat (usec): min=4, max=12559, avg=158.48, stdev=880.54 00:19:34.744 clat (usec): min=8770, max=54184, avg=20847.86, stdev=6153.16 00:19:34.744 lat (usec): min=8781, max=54196, avg=21006.34, stdev=6218.62 00:19:34.744 clat percentiles (usec): 00:19:34.744 | 1.00th=[10945], 5.00th=[14353], 10.00th=[14877], 20.00th=[15664], 00:19:34.744 | 30.00th=[16450], 40.00th=[19006], 50.00th=[19530], 60.00th=[20317], 00:19:34.744 | 70.00th=[21365], 80.00th=[26084], 90.00th=[29492], 95.00th=[32900], 00:19:34.744 | 99.00th=[36963], 99.50th=[40109], 99.90th=[52691], 99.95th=[54264], 00:19:34.744 | 99.99th=[54264] 00:19:34.744 bw ( KiB/s): min= 8288, max=11448, per=18.84%, avg=9868.00, stdev=2234.46, samples=2 00:19:34.744 iops : min= 2072, max= 2862, avg=2467.00, stdev=558.61, samples=2 00:19:34.744 lat (msec) : 10=0.99%, 20=30.81%, 50=60.00%, 100=8.21% 00:19:34.744 cpu : usr=2.79%, sys=3.78%, ctx=242, majf=0, minf=15 00:19:34.744 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:19:34.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:34.744 issued rwts: total=2082,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.744 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.744 job3: (groupid=0, jobs=1): err= 0: pid=4076995: Sat Jul 13 
22:02:53 2024 00:19:34.744 read: IOPS=3739, BW=14.6MiB/s (15.3MB/s)(14.7MiB/1003msec) 00:19:34.744 slat (usec): min=2, max=22392, avg=130.23, stdev=840.60 00:19:34.744 clat (usec): min=1424, max=43054, avg=16193.38, stdev=5600.02 00:19:34.744 lat (usec): min=2777, max=43061, avg=16323.62, stdev=5637.68 00:19:34.744 clat percentiles (usec): 00:19:34.744 | 1.00th=[ 6194], 5.00th=[11863], 10.00th=[12780], 20.00th=[14222], 00:19:34.744 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15008], 60.00th=[15401], 00:19:34.744 | 70.00th=[15926], 80.00th=[16712], 90.00th=[18744], 95.00th=[21890], 00:19:34.744 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:19:34.744 | 99.99th=[43254] 00:19:34.744 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:19:34.744 slat (usec): min=4, max=23546, avg=117.09, stdev=754.88 00:19:34.744 clat (usec): min=7766, max=54497, avg=16105.54, stdev=5110.07 00:19:34.744 lat (usec): min=7996, max=54511, avg=16222.63, stdev=5143.91 00:19:34.744 clat percentiles (usec): 00:19:34.744 | 1.00th=[ 8979], 5.00th=[12256], 10.00th=[13435], 20.00th=[13960], 00:19:34.744 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14484], 60.00th=[15008], 00:19:34.744 | 70.00th=[15795], 80.00th=[17695], 90.00th=[19268], 95.00th=[22152], 00:19:34.744 | 99.00th=[43779], 99.50th=[50594], 99.90th=[51643], 99.95th=[51643], 00:19:34.744 | 99.99th=[54264] 00:19:34.744 bw ( KiB/s): min=16384, max=16384, per=31.27%, avg=16384.00, stdev= 0.00, samples=2 00:19:34.744 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:19:34.744 lat (msec) : 2=0.01%, 4=0.19%, 10=1.12%, 20=91.28%, 50=7.12% 00:19:34.744 lat (msec) : 100=0.27% 00:19:34.744 cpu : usr=4.59%, sys=5.99%, ctx=311, majf=0, minf=11 00:19:34.744 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:34.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:34.744 issued rwts: total=3751,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.744 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.744 00:19:34.744 Run status group 0 (all jobs): 00:19:34.744 READ: bw=46.1MiB/s (48.3MB/s), 8287KiB/s-15.8MiB/s (8485kB/s-16.6MB/s), io=48.5MiB (50.9MB), run=1003-1053msec 00:19:34.744 WRITE: bw=51.2MiB/s (53.6MB/s), 9725KiB/s-17.7MiB/s (9958kB/s-18.5MB/s), io=53.9MiB (56.5MB), run=1003-1053msec 00:19:34.744 00:19:34.744 Disk stats (read/write): 00:19:34.744 nvme0n1: ios=3611/3591, merge=0/0, ticks=24240/22369, in_queue=46609, util=97.39% 00:19:34.744 nvme0n2: ios=2070/2144, merge=0/0, ticks=26692/30224, in_queue=56916, util=97.36% 00:19:34.744 nvme0n3: ios=1850/2048, merge=0/0, ticks=21013/14185, in_queue=35198, util=97.60% 00:19:34.744 nvme0n4: ios=3136/3584, merge=0/0, ticks=22785/24683, in_queue=47468, util=89.57% 00:19:34.744 22:02:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:34.744 22:02:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4077129 00:19:34.744 22:02:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:34.744 22:02:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:34.744 [global] 00:19:34.744 thread=1 00:19:34.744 invalidate=1 00:19:34.744 rw=read 00:19:34.744 time_based=1 00:19:34.744 runtime=10 00:19:34.744 ioengine=libaio 00:19:34.744 direct=1 00:19:34.744 bs=4096 00:19:34.744 
iodepth=1 00:19:34.744 norandommap=1 00:19:34.744 numjobs=1 00:19:34.744 00:19:34.744 [job0] 00:19:34.744 filename=/dev/nvme0n1 00:19:34.744 [job1] 00:19:34.744 filename=/dev/nvme0n2 00:19:34.744 [job2] 00:19:34.744 filename=/dev/nvme0n3 00:19:34.744 [job3] 00:19:34.744 filename=/dev/nvme0n4 00:19:34.744 Could not set queue depth (nvme0n1) 00:19:34.744 Could not set queue depth (nvme0n2) 00:19:34.744 Could not set queue depth (nvme0n3) 00:19:34.744 Could not set queue depth (nvme0n4) 00:19:35.051 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:35.051 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:35.051 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:35.051 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:35.051 fio-3.35 00:19:35.051 Starting 4 threads 00:19:37.580 22:02:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:37.839 22:02:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:38.097 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=20967424, buflen=4096 00:19:38.097 fio: pid=4077226, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:38.097 22:02:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:38.097 22:02:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:38.355 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=10305536, buflen=4096 00:19:38.355 fio: pid=4077225, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:38.355 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=5619712, buflen=4096 00:19:38.355 fio: pid=4077223, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:38.613 22:02:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:38.613 22:02:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:38.872 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=13836288, buflen=4096 00:19:38.872 fio: pid=4077224, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:38.872 22:02:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:38.872 22:02:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:38.872 00:19:38.872 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4077223: Sat Jul 13 22:02:58 2024 00:19:38.872 read: IOPS=404, BW=1618KiB/s (1657kB/s)(5488KiB/3392msec) 00:19:38.872 slat (usec): min=4, max=29629, avg=70.37, stdev=1221.07 00:19:38.872 clat (usec): min=314, max=43067, avg=2383.16, stdev=8758.38 00:19:38.872 lat (usec): min=318, max=43082, avg=2453.57, stdev=8833.19 00:19:38.872 clat percentiles (usec): 00:19:38.872 | 
1.00th=[ 322], 5.00th=[ 334], 10.00th=[ 343], 20.00th=[ 363], 00:19:38.872 | 30.00th=[ 379], 40.00th=[ 383], 50.00th=[ 400], 60.00th=[ 437], 00:19:38.872 | 70.00th=[ 461], 80.00th=[ 502], 90.00th=[ 515], 95.00th=[ 619], 00:19:38.872 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[43254], 00:19:38.872 | 99.99th=[43254] 00:19:38.872 bw ( KiB/s): min= 96, max= 4576, per=6.43%, avg=852.00, stdev=1824.44, samples=6 00:19:38.872 iops : min= 24, max= 1144, avg=213.00, stdev=456.11, samples=6 00:19:38.872 lat (usec) : 500=79.24%, 750=15.80%, 1000=0.07% 00:19:38.872 lat (msec) : 50=4.81% 00:19:38.872 cpu : usr=0.21%, sys=0.47%, ctx=1379, majf=0, minf=1 00:19:38.872 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:38.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.872 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.872 issued rwts: total=1373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:38.872 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:38.872 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=4077224: Sat Jul 13 22:02:58 2024 00:19:38.872 read: IOPS=903, BW=3615KiB/s (3702kB/s)(13.2MiB/3738msec) 00:19:38.872 slat (usec): min=4, max=12841, avg=26.79, stdev=333.35 00:19:38.872 clat (usec): min=319, max=94397, avg=1075.26, stdev=5459.23 00:19:38.872 lat (usec): min=325, max=94410, avg=1099.57, stdev=5520.58 00:19:38.872 clat percentiles (usec): 00:19:38.872 | 1.00th=[ 334], 5.00th=[ 343], 10.00th=[ 347], 20.00th=[ 355], 00:19:38.872 | 30.00th=[ 363], 40.00th=[ 375], 50.00th=[ 388], 60.00th=[ 404], 00:19:38.872 | 70.00th=[ 433], 80.00th=[ 449], 90.00th=[ 482], 95.00th=[ 545], 00:19:38.872 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[89654], 00:19:38.872 | 99.99th=[94897] 00:19:38.872 bw ( KiB/s): min= 86, max=10376, per=29.09%, avg=3855.71, stdev=4799.00, samples=7 00:19:38.872 iops : min= 21, max= 2594, avg=963.86, stdev=1199.82, samples=7 00:19:38.872 lat (usec) : 500=90.65%, 750=7.69%, 1000=0.03% 00:19:38.872 lat (msec) : 2=0.03%, 50=1.48%, 100=0.09% 00:19:38.872 cpu : usr=0.99%, sys=1.77%, ctx=3381, majf=0, minf=1 00:19:38.872 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:38.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.872 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.872 issued rwts: total=3379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:38.872 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:38.872 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4077225: Sat Jul 13 22:02:58 2024 00:19:38.872 read: IOPS=794, BW=3175KiB/s (3251kB/s)(9.83MiB/3170msec) 00:19:38.872 slat (nsec): min=4390, max=69643, avg=14546.39, stdev=8415.57 00:19:38.872 clat (usec): min=333, max=42186, avg=1232.55, stdev=5735.60 00:19:38.872 lat (usec): min=339, max=42200, avg=1247.09, stdev=5736.80 00:19:38.872 clat percentiles (usec): 00:19:38.872 | 1.00th=[ 338], 5.00th=[ 351], 10.00th=[ 359], 20.00th=[ 379], 00:19:38.872 | 30.00th=[ 388], 40.00th=[ 396], 50.00th=[ 400], 60.00th=[ 404], 00:19:38.872 | 70.00th=[ 416], 80.00th=[ 445], 90.00th=[ 465], 95.00th=[ 498], 00:19:38.872 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:38.872 | 99.99th=[42206] 00:19:38.872 bw ( KiB/s): min= 120, max= 9344, per=25.27%, avg=3349.33, 
stdev=3846.28, samples=6 00:19:38.872 iops : min= 30, max= 2336, avg=837.33, stdev=961.57, samples=6 00:19:38.872 lat (usec) : 500=95.11%, 750=2.78%, 1000=0.04% 00:19:38.872 lat (msec) : 50=2.03% 00:19:38.872 cpu : usr=0.50%, sys=1.33%, ctx=2519, majf=0, minf=1 00:19:38.872 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:38.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.872 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.872 issued rwts: total=2517,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:38.872 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:38.872 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4077226: Sat Jul 13 22:02:58 2024 00:19:38.872 read: IOPS=1763, BW=7051KiB/s (7220kB/s)(20.0MiB/2904msec) 00:19:38.872 slat (nsec): min=4825, max=48993, avg=11602.80, stdev=5760.56 00:19:38.872 clat (usec): min=346, max=41219, avg=548.33, stdev=1497.08 00:19:38.872 lat (usec): min=355, max=41227, avg=559.94, stdev=1497.34 00:19:38.872 clat percentiles (usec): 00:19:38.872 | 1.00th=[ 383], 5.00th=[ 441], 10.00th=[ 453], 20.00th=[ 469], 00:19:38.872 | 30.00th=[ 478], 40.00th=[ 486], 50.00th=[ 490], 60.00th=[ 498], 00:19:38.872 | 70.00th=[ 510], 80.00th=[ 519], 90.00th=[ 537], 95.00th=[ 553], 00:19:38.872 | 99.00th=[ 594], 99.50th=[ 611], 99.90th=[40633], 99.95th=[41157], 00:19:38.872 | 99.99th=[41157] 00:19:38.872 bw ( KiB/s): min= 6560, max= 8184, per=57.57%, avg=7630.40, stdev=641.16, samples=5 00:19:38.872 iops : min= 1640, max= 2046, avg=1907.60, stdev=160.29, samples=5 00:19:38.872 lat (usec) : 500=62.54%, 750=37.30% 00:19:38.872 lat (msec) : 50=0.14% 00:19:38.872 cpu : usr=1.03%, sys=3.48%, ctx=5122, majf=0, minf=1 00:19:38.872 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:38.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.872 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.872 issued rwts: total=5120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:38.872 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:38.872 00:19:38.872 Run status group 0 (all jobs): 00:19:38.872 READ: bw=12.9MiB/s (13.6MB/s), 1618KiB/s-7051KiB/s (1657kB/s-7220kB/s), io=48.4MiB (50.7MB), run=2904-3738msec 00:19:38.872 00:19:38.872 Disk stats (read/write): 00:19:38.872 nvme0n1: ios=1340/0, merge=0/0, ticks=3244/0, in_queue=3244, util=93.68% 00:19:38.872 nvme0n2: ios=3376/0, merge=0/0, ticks=3394/0, in_queue=3394, util=95.98% 00:19:38.872 nvme0n3: ios=2514/0, merge=0/0, ticks=2937/0, in_queue=2937, util=96.79% 00:19:38.872 nvme0n4: ios=5161/0, merge=0/0, ticks=3131/0, in_queue=3131, util=99.29% 00:19:39.131 22:02:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:39.131 22:02:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:39.389 22:02:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:39.389 22:02:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:39.647 22:02:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:19:39.647 22:02:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:39.905 22:02:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:39.905 22:02:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:40.471 22:02:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:40.471 22:02:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 4077129 00:19:40.471 22:02:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:40.471 22:02:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:41.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:41.405 22:03:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:41.405 22:03:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:19:41.405 22:03:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:41.405 22:03:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:41.405 22:03:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:41.405 22:03:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:41.405 22:03:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:19:41.405 22:03:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:41.405 22:03:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:41.405 nvmf hotplug test: fio failed as expected 00:19:41.405 22:03:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:41.405 22:03:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:41.405 22:03:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:41.405 22:03:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:41.405 22:03:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:41.405 22:03:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:41.405 22:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:41.405 22:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:41.405 22:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:41.405 22:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:41.405 22:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:41.405 22:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:41.405 rmmod nvme_tcp 00:19:41.405 rmmod nvme_fabrics 00:19:41.405 rmmod nvme_keyring 00:19:41.405 22:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:41.663 22:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:41.663 22:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:41.663 22:03:00 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 4075080 ']' 00:19:41.663 22:03:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 4075080 00:19:41.663 22:03:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 4075080 ']' 00:19:41.663 22:03:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 4075080 00:19:41.663 22:03:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:19:41.663 22:03:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:41.663 22:03:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4075080 00:19:41.663 22:03:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:41.663 22:03:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:41.663 22:03:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4075080' 00:19:41.663 killing process with pid 4075080 00:19:41.663 22:03:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 4075080 00:19:41.663 22:03:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 4075080 00:19:43.039 22:03:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:43.039 22:03:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:43.039 22:03:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:43.039 22:03:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:43.039 22:03:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:43.039 22:03:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.039 22:03:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:43.039 22:03:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.939 22:03:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:44.939 00:19:44.939 real 0m26.338s 00:19:44.939 user 1m31.186s 00:19:44.939 sys 0m6.667s 00:19:44.939 22:03:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:44.939 22:03:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.939 ************************************ 00:19:44.939 END TEST nvmf_fio_target 00:19:44.939 ************************************ 00:19:44.939 22:03:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:44.939 22:03:04 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:44.939 22:03:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:44.939 22:03:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:44.939 22:03:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:44.939 ************************************ 00:19:44.939 START TEST nvmf_bdevio 00:19:44.939 ************************************ 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:44.939 * Looking for test storage... 
00:19:44.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:44.939 22:03:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:46.841 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:46.841 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:46.841 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:46.841 
Found net devices under 0000:0a:00.1: cvl_0_1 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:46.841 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:47.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:47.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:19:47.099 00:19:47.099 --- 10.0.0.2 ping statistics --- 00:19:47.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.099 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:47.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:47.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:19:47.099 00:19:47.099 --- 10.0.0.1 ping statistics --- 00:19:47.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.099 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=4080222 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 4080222 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 4080222 ']' 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:47.099 22:03:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:47.099 [2024-07-13 22:03:06.378672] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:47.099 [2024-07-13 22:03:06.378806] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.099 EAL: No free 2048 kB hugepages reported on node 1 00:19:47.357 [2024-07-13 22:03:06.517439] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:47.615 [2024-07-13 22:03:06.764079] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:47.615 [2024-07-13 22:03:06.764144] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
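The nvmf_tcp_init sequence traced above is what lets one dual-port e810 host play both ends of an NVMe/TCP link: port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), and the two pings prove the path in both directions before the target app is launched inside the namespace. A minimal standalone sketch of that wiring, using the interface and namespace names from this run (they differ per machine):

    ip netns add cvl_0_0_ns_spdk                        # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator end of the link
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up     # lo up so in-namespace tooling works
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator side
    ping -c 1 10.0.0.2                                  # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> root ns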
00:19:47.615 [2024-07-13 22:03:06.764172] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:47.615 [2024-07-13 22:03:06.764193] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:47.615 [2024-07-13 22:03:06.764215] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:47.615 [2024-07-13 22:03:06.764339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:47.615 [2024-07-13 22:03:06.764545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:19:47.615 [2024-07-13 22:03:06.764630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:47.615 [2024-07-13 22:03:06.764663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:48.182 [2024-07-13 22:03:07.343378] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:48.182 Malloc0 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:19:48.182 [2024-07-13 22:03:07.447790] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:48.182 { 00:19:48.182 "params": { 00:19:48.182 "name": "Nvme$subsystem", 00:19:48.182 "trtype": "$TEST_TRANSPORT", 00:19:48.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.182 "adrfam": "ipv4", 00:19:48.182 "trsvcid": "$NVMF_PORT", 00:19:48.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.182 "hdgst": ${hdgst:-false}, 00:19:48.182 "ddgst": ${ddgst:-false} 00:19:48.182 }, 00:19:48.182 "method": "bdev_nvme_attach_controller" 00:19:48.182 } 00:19:48.182 EOF 00:19:48.182 )") 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:48.182 22:03:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:48.182 "params": { 00:19:48.182 "name": "Nvme1", 00:19:48.182 "trtype": "tcp", 00:19:48.182 "traddr": "10.0.0.2", 00:19:48.182 "adrfam": "ipv4", 00:19:48.182 "trsvcid": "4420", 00:19:48.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:48.182 "hdgst": false, 00:19:48.182 "ddgst": false 00:19:48.182 }, 00:19:48.182 "method": "bdev_nvme_attach_controller" 00:19:48.182 }' 00:19:48.182 [2024-07-13 22:03:07.531288] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
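Everything bdevio needs on the target side is created by the four rpc_cmd calls traced at bdevio.sh@18-22. Outside the harness the same state can be built with scripts/rpc.py against the target's UNIX-domain RPC socket (reachable from the root namespace even though the target process runs inside cvl_0_0_ns_spdk); a sketch reusing the exact flags from this run:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data, harness's -o option
    $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM disk, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420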
00:19:48.182 [2024-07-13 22:03:07.531425] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4080589 ] 00:19:48.441 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.441 [2024-07-13 22:03:07.661560] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:48.699 [2024-07-13 22:03:07.906468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.699 [2024-07-13 22:03:07.906512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.699 [2024-07-13 22:03:07.906521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.264 I/O targets: 00:19:49.264 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:49.264 00:19:49.264 00:19:49.264 CUnit - A unit testing framework for C - Version 2.1-3 00:19:49.264 http://cunit.sourceforge.net/ 00:19:49.264 00:19:49.264 00:19:49.264 Suite: bdevio tests on: Nvme1n1 00:19:49.264 Test: blockdev write read block ...passed 00:19:49.264 Test: blockdev write zeroes read block ...passed 00:19:49.264 Test: blockdev write zeroes read no split ...passed 00:19:49.264 Test: blockdev write zeroes read split ...passed 00:19:49.521 Test: blockdev write zeroes read split partial ...passed 00:19:49.521 Test: blockdev reset ...[2024-07-13 22:03:08.732849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:49.521 [2024-07-13 22:03:08.733038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2f00 (9): Bad file descriptor 00:19:49.521 [2024-07-13 22:03:08.792650] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:49.521 passed 00:19:49.521 Test: blockdev write read 8 blocks ...passed 00:19:49.521 Test: blockdev write read size > 128k ...passed 00:19:49.521 Test: blockdev write read invalid size ...passed 00:19:49.521 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:49.521 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:49.521 Test: blockdev write read max offset ...passed 00:19:49.779 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:49.779 Test: blockdev writev readv 8 blocks ...passed 00:19:49.779 Test: blockdev writev readv 30 x 1block ...passed 00:19:49.779 Test: blockdev writev readv block ...passed 00:19:49.779 Test: blockdev writev readv size > 128k ...passed 00:19:49.779 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:49.779 Test: blockdev comparev and writev ...[2024-07-13 22:03:09.016738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:49.779 [2024-07-13 22:03:09.016813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:49.779 [2024-07-13 22:03:09.016861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:49.779 [2024-07-13 22:03:09.016898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:49.779 [2024-07-13 22:03:09.017445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:49.779 [2024-07-13 22:03:09.017479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:49.779 [2024-07-13 22:03:09.017512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:49.779 [2024-07-13 22:03:09.017538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:49.779 [2024-07-13 22:03:09.018095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:49.779 [2024-07-13 22:03:09.018128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:49.779 [2024-07-13 22:03:09.018167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:49.779 [2024-07-13 22:03:09.018192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:49.779 [2024-07-13 22:03:09.018696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:49.779 [2024-07-13 22:03:09.018727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:49.779 [2024-07-13 22:03:09.018760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:49.779 [2024-07-13 22:03:09.018785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:49.779 passed 00:19:49.779 Test: blockdev nvme passthru rw ...passed 00:19:49.779 Test: blockdev nvme passthru vendor specific ...[2024-07-13 22:03:09.103417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:49.779 [2024-07-13 22:03:09.103471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:49.779 [2024-07-13 22:03:09.103781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:49.779 [2024-07-13 22:03:09.103812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:49.779 [2024-07-13 22:03:09.104129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:49.779 [2024-07-13 22:03:09.104165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:49.779 [2024-07-13 22:03:09.104461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:49.779 [2024-07-13 22:03:09.104492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:49.779 passed 00:19:49.779 Test: blockdev nvme admin passthru ...passed 00:19:49.779 Test: blockdev copy ...passed 00:19:49.779 00:19:49.779 Run Summary: Type Total Ran Passed Failed Inactive 00:19:49.779 suites 1 1 n/a 0 0 00:19:49.779 tests 23 23 23 0 0 00:19:49.779 asserts 152 152 152 0 n/a 00:19:49.779 00:19:49.779 Elapsed time = 1.421 seconds 00:19:51.192 22:03:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:51.192 22:03:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.192 22:03:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:51.193 22:03:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.193 22:03:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:51.193 22:03:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:51.193 22:03:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:51.193 22:03:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:51.193 22:03:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:51.193 22:03:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:51.193 22:03:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:51.193 22:03:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:51.193 rmmod nvme_tcp 00:19:51.193 rmmod nvme_fabrics 00:19:51.193 rmmod nvme_keyring 00:19:51.193 22:03:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:51.193 22:03:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:51.193 22:03:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:51.193 22:03:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 4080222 ']' 00:19:51.193 22:03:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 4080222 00:19:51.193 22:03:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
4080222 ']' 00:19:51.193 22:03:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 4080222 00:19:51.193 22:03:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:19:51.193 22:03:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:51.193 22:03:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4080222 00:19:51.193 22:03:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:19:51.193 22:03:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:19:51.193 22:03:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4080222' 00:19:51.193 killing process with pid 4080222 00:19:51.193 22:03:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 4080222 00:19:51.193 22:03:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 4080222 00:19:52.566 22:03:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:52.566 22:03:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:52.566 22:03:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:52.566 22:03:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:52.566 22:03:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:52.566 22:03:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.566 22:03:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.566 22:03:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.470 22:03:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:54.470 00:19:54.470 real 0m9.480s 00:19:54.470 user 0m23.757s 00:19:54.470 sys 0m2.351s 00:19:54.470 22:03:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:54.470 22:03:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:54.470 ************************************ 00:19:54.470 END TEST nvmf_bdevio 00:19:54.470 ************************************ 00:19:54.470 22:03:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:54.470 22:03:13 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:54.470 22:03:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:54.470 22:03:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:54.470 22:03:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:54.470 ************************************ 00:19:54.470 START TEST nvmf_auth_target 00:19:54.470 ************************************ 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:54.470 * Looking for test storage... 
00:19:54.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:54.470 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:54.471 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.471 22:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:54.471 22:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.471 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:54.471 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:54.471 22:03:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:54.471 22:03:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.370 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:56.370 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:56.370 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:56.370 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:56.370 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:56.370 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:56.370 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:56.370 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:56.370 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:56.370 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:56.370 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:56.370 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:56.370 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:56.370 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:56.370 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:56.370 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:56.371 22:03:15 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:56.371 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:56.371 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:19:56.371 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:56.371 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:56.371 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:56.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:56.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:19:56.630 00:19:56.630 --- 10.0.0.2 ping statistics --- 00:19:56.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.630 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:56.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:56.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:19:56.630 00:19:56.630 --- 10.0.0.1 ping statistics --- 00:19:56.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.630 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=4083218 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 4083218 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4083218 ']' 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
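waitforlisten then gates the rest of the script on the target actually accepting RPCs: it polls the RPC socket until a trivial call succeeds, bailing out early if the pid dies first. A re-sketch of that pattern (the loop bounds and the spdk_get_version probe are illustrative, not autotest_common.sh's exact code):

    waitforlisten() {
        local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1     # app died before it listened
            scripts/rpc.py -s "$rpc_sock" spdk_get_version &> /dev/null && return 0
            sleep 0.1
        done
        return 1                                        # never came up
    }
    waitforlisten "$nvmfpid" || exit 1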
00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:56.630 22:03:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.566 22:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:57.566 22:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:57.566 22:03:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:57.566 22:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:57.566 22:03:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.825 22:03:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.825 22:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=4083369 00:19:57.825 22:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:57.825 22:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:57.825 22:03:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:57.825 22:03:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:57.825 22:03:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:57.825 22:03:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:57.825 22:03:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:57.826 22:03:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:57.826 22:03:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:57.826 22:03:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=89273ba2ebf3faf15bf3594d1540852a2d448c342926bbbe 00:19:57.826 22:03:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:57.826 22:03:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.SXl 00:19:57.826 22:03:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 89273ba2ebf3faf15bf3594d1540852a2d448c342926bbbe 0 00:19:57.826 22:03:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 89273ba2ebf3faf15bf3594d1540852a2d448c342926bbbe 0 00:19:57.826 22:03:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:57.826 22:03:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:57.826 22:03:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=89273ba2ebf3faf15bf3594d1540852a2d448c342926bbbe 00:19:57.826 22:03:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:57.826 22:03:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.SXl 00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.SXl 00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.SXl 00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6189edbbd0c432aba6182d64421ae0d274ce5d4732ee7a8993dca757964ecaa7
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.u78
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6189edbbd0c432aba6182d64421ae0d274ce5d4732ee7a8993dca757964ecaa7 3
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6189edbbd0c432aba6182d64421ae0d274ce5d4732ee7a8993dca757964ecaa7 3
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6189edbbd0c432aba6182d64421ae0d274ce5d4732ee7a8993dca757964ecaa7
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python -
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.u78
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.u78
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.u78
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=95dbd7382d5d5da9a1b0f4eebe0d840b
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.NQa
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 95dbd7382d5d5da9a1b0f4eebe0d840b 1
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 95dbd7382d5d5da9a1b0f4eebe0d840b 1
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=95dbd7382d5d5da9a1b0f4eebe0d840b
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python -
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.NQa
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.NQa
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.NQa
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=be7351bf19ef525f20b1b197b96703fd03df25df1eef0ad4
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ySA
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key be7351bf19ef525f20b1b197b96703fd03df25df1eef0ad4 2
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 be7351bf19ef525f20b1b197b96703fd03df25df1eef0ad4 2
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=be7351bf19ef525f20b1b197b96703fd03df25df1eef0ad4
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python -
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ySA
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ySA
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.ySA
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3a54b3171a3cdca4fcbbdc95c11a199d7ccd40f729fa83bb
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.eZA
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3a54b3171a3cdca4fcbbdc95c11a199d7ccd40f729fa83bb 2
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3a54b3171a3cdca4fcbbdc95c11a199d7ccd40f729fa83bb 2
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3a54b3171a3cdca4fcbbdc95c11a199d7ccd40f729fa83bb
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2
00:19:57.826 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python -
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.eZA
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.eZA
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.eZA
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=69c890dfd385a5e92d5848dbcefdbd8e
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.6S5
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 69c890dfd385a5e92d5848dbcefdbd8e 1
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 69c890dfd385a5e92d5848dbcefdbd8e 1
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=69c890dfd385a5e92d5848dbcefdbd8e
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python -
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.6S5
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.6S5
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.6S5
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0b007209f85febdc5042836303ea4d2b8435c47e9ec710dff9971859f8f6dd59
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Pok
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0b007209f85febdc5042836303ea4d2b8435c47e9ec710dff9971859f8f6dd59 3
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0b007209f85febdc5042836303ea4d2b8435c47e9ec710dff9971859f8f6dd59 3
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0b007209f85febdc5042836303ea4d2b8435c47e9ec710dff9971859f8f6dd59
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python -
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Pok
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Pok
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Pok
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]=
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 4083218
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4083218 ']'
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
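The gen_dhchap_key trace above reduces to a small helper pair in nvmf/common.sh: draw len/2 random bytes, hex-encode them, and wrap the result as a DHHC-1 secret in a 0600 temp file. A minimal sketch follows; the body of the `python -` heredoc is not captured by xtrace, so the encoding shown (key bytes followed by their little-endian CRC-32, base64-encoded, with the digest number printed as two hex digits) is an assumption inferred from the DHHC-1:0X:...: secrets visible later in this log, and `python` is assumed to resolve to Python 3.

gen_dhchap_key() { # usage: gen_dhchap_key <digest> <hex length>, e.g. gen_dhchap_key sha512 64
	local -A digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
	local digest=$1 len=$2 file key
	# xxd prints two hex characters per byte, so read len/2 bytes of randomness
	key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
	file=$(mktemp -t "spdk.key-$digest.XXX")
	format_dhchap_key "$key" "${digests[$digest]}" > "$file"
	chmod 0600 "$file"
	echo "$file"
}

format_dhchap_key() { format_key DHHC-1 "$@"; }

format_key() {
	local prefix=$1 key=$2 digest=$3
	python - <<EOF
import base64, zlib
key = b"$key"
# assumed encoding: base64(key bytes + little-endian CRC-32), wrapped as
# <prefix>:<digest as two hex digits>:<base64>:
crc = zlib.crc32(key).to_bytes(4, "little")
print("$prefix:{:02x}:{}:".format($digest, base64.b64encode(key + crc).decode()), end="")
EOF
}

Run against the values traced above, gen_dhchap_key sha512 64 would yield a file like /tmp/spdk.key-sha512.u78 holding a DHHC-1:03:...: secret, which is exactly the form the nvme connect commands later in this log pass via --dhchap-secret and --dhchap-ctrl-secret.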
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:58.086 22:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:58.345 22:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:58.345 22:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0
00:19:58.345 22:03:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 4083369 /var/tmp/host.sock
00:19:58.345 22:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4083369 ']'
00:19:58.345 22:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock
00:19:58.345 22:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:58.345 22:03:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:19:58.912 22:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:58.912 22:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0
00:19:58.912 22:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd
00:19:58.912 22:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:58.912 22:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:59.171 22:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:59.171 22:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}"
00:19:59.171 22:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.SXl
00:19:59.171 22:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:59.171 22:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:59.171 22:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:59.171 22:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.SXl
00:19:59.171 22:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.SXl
00:19:59.429 22:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.u78 ]]
00:19:59.429 22:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.u78
00:19:59.429 22:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:59.429 22:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:59.430 22:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:59.430 22:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.u78
00:19:59.430 22:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.u78
00:19:59.688 22:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}"
00:19:59.688 22:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.NQa
00:19:59.688 22:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:59.688 22:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:59.688 22:03:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:59.688 22:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.NQa
00:19:59.688 22:03:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.NQa
00:19:59.946 22:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.ySA ]]
00:19:59.947 22:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ySA
00:19:59.947 22:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:59.947 22:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:59.947 22:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:59.947 22:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ySA
00:19:59.947 22:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ySA
00:20:00.205 22:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}"
00:20:00.205 22:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.eZA
00:20:00.205 22:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:00.205 22:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:00.205 22:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:00.205 22:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.eZA
00:20:00.205 22:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.eZA
00:20:00.463 22:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.6S5 ]]
00:20:00.463 22:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6S5
00:20:00.463 22:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:00.463 22:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:00.463 22:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:00.463 22:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6S5
00:20:00.463 22:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6S5
00:20:00.721 22:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}"
00:20:00.721 22:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Pok
00:20:00.721 22:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:00.721 22:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:00.721 22:03:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:00.721 22:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Pok
00:20:00.721 22:03:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Pok
00:20:00.980 22:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]]
00:20:00.980 22:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}"
00:20:00.980 22:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:20:00.980 22:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:00.980 22:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:20:00.980 22:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:20:01.238 22:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0
00:20:01.238 22:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:01.238 22:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:01.238 22:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:20:01.238 22:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:20:01.238 22:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:01.238 22:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:01.238 22:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:01.238 22:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:01.238 22:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:01.238 22:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:01.238 22:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:01.496
00:20:01.496 22:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:01.496 22:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:01.496 22:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:01.755 22:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:01.755 22:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:01.755 22:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:01.755 22:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:01.755 22:03:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:01.755 22:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:01.755 {
00:20:01.755 "cntlid": 1,
00:20:01.755 "qid": 0,
00:20:01.755 "state": "enabled",
00:20:01.755 "thread": "nvmf_tgt_poll_group_000",
00:20:01.755 "listen_address": {
00:20:01.755 "trtype": "TCP",
00:20:01.755 "adrfam": "IPv4",
00:20:01.755 "traddr": "10.0.0.2",
00:20:01.755 "trsvcid": "4420"
00:20:01.755 },
00:20:01.755 "peer_address": {
00:20:01.755 "trtype": "TCP",
00:20:01.755 "adrfam": "IPv4",
00:20:01.755 "traddr": "10.0.0.1",
00:20:01.755 "trsvcid": "40328"
00:20:01.755 },
00:20:01.755 "auth": {
00:20:01.755 "state": "completed",
00:20:01.755 "digest": "sha256",
00:20:01.755 "dhgroup": "null"
00:20:01.755 }
00:20:01.755 }
00:20:01.755 ]' 22:03:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:01.755 22:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:01.755 22:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:01.755 22:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:20:01.755 22:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:01.755 22:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:01.755 22:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:01.755 22:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:02.014 22:03:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkyNzNiYTJlYmYzZmFmMTViZjM1OTRkMTU0MDg1MmEyZDQ0OGMzNDI5MjZiYmJlkzD1iw==: --dhchap-ctrl-secret DHHC-1:03:NjE4OWVkYmJkMGM0MzJhYmE2MTgyZDY0NDIxYWUwZDI3NGNlNWQ0NzMyZWU3YTg5OTNkY2E3NTc5NjRlY2FhN8pEGEs=:
00:20:02.948 22:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:02.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:02.948 22:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:02.948 22:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:02.948 22:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:02.948 22:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:02.948 22:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:02.948 22:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:20:02.948 22:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:20:03.207 22:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1
00:20:03.207 22:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:03.207 22:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:03.207 22:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:20:03.207 22:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:20:03.207 22:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:03.207 22:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:03.207 22:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:03.207 22:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:03.207 22:03:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:03.207 22:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:03.207 22:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:03.465
00:20:03.465 22:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:03.465 22:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:03.465 22:03:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:03.724 22:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:03.724 22:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:03.724 22:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:03.724 22:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:03.724 22:03:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:03.724 22:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:03.724 {
00:20:03.724 "cntlid": 3,
00:20:03.724 "qid": 0, "state": "enabled",
00:20:03.724 "thread": "nvmf_tgt_poll_group_000",
00:20:03.724 "listen_address": {
00:20:03.724 "trtype": "TCP",
00:20:03.724 "adrfam": "IPv4",
00:20:03.724 "traddr": "10.0.0.2",
00:20:03.724 "trsvcid": "4420"
00:20:03.724 },
00:20:03.724 "peer_address": {
00:20:03.724 "trtype": "TCP",
00:20:03.724 "adrfam": "IPv4",
00:20:03.724 "traddr": "10.0.0.1",
00:20:03.724 "trsvcid": "40366"
00:20:03.724 },
00:20:03.724 "auth": {
00:20:03.724 "state": "completed",
00:20:03.724 "digest": "sha256",
00:20:03.724 "dhgroup": "null"
00:20:03.724 }
00:20:03.724 }
00:20:03.724 ]' 22:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:03.984 22:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:03.984 22:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:03.984 22:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:20:03.984 22:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:03.984 22:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:03.984 22:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:03.984 22:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:04.241 22:03:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTVkYmQ3MzgyZDVkNWRhOWExYjBmNGVlYmUwZDg0MGJHHEcY: --dhchap-ctrl-secret DHHC-1:02:YmU3MzUxYmYxOWVmNTI1ZjIwYjFiMTk3Yjk2NzAzZmQwM2RmMjVkZjFlZWYwYWQ0E3C4iA==:
00:20:05.173 22:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:05.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:05.173 22:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:05.173 22:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:05.173 22:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:05.173 22:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:05.173 22:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:05.173 22:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:20:05.173 22:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:20:05.462 22:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2
00:20:05.462 22:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:05.462 22:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:05.462 22:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:20:05.462 22:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:20:05.462 22:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:05.462 22:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:05.462 22:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:05.462 22:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:05.462 22:03:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:05.462 22:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:05.462 22:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:05.720
00:20:05.720 22:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:05.720 22:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:05.720 22:03:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:05.979 22:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:05.979 22:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:05.979 22:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:05.979 22:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:05.979 22:03:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:05.979 22:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:05.979 {
00:20:05.979 "cntlid": 5,
00:20:05.979 "qid": 0,
00:20:05.979 "state": "enabled",
00:20:05.979 "thread": "nvmf_tgt_poll_group_000",
00:20:05.979 "listen_address": {
00:20:05.979 "trtype": "TCP",
00:20:05.979 "adrfam": "IPv4",
00:20:05.979 "traddr": "10.0.0.2",
00:20:05.979 "trsvcid": "4420"
00:20:05.979 },
00:20:05.979 "peer_address": {
00:20:05.979 "trtype": "TCP",
00:20:05.979 "adrfam": "IPv4",
00:20:05.979 "traddr": "10.0.0.1",
00:20:05.979 "trsvcid": "40396"
00:20:05.979 },
00:20:05.979 "auth": {
00:20:05.979 "state": "completed",
00:20:05.979 "digest": "sha256",
00:20:05.979 "dhgroup": "null"
00:20:05.979 }
00:20:05.979 }
00:20:05.979 ]' 22:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:05.979 22:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:05.979 22:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:05.979 22:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:20:05.979 22:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:05.979 22:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:05.979 22:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:05.979 22:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:06.238 22:03:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2E1NGIzMTcxYTNjZGNhNGZjYmJkYzk1YzExYTE5OWQ3Y2NkNDBmNzI5ZmE4M2JiS38vHQ==: --dhchap-ctrl-secret DHHC-1:01:NjljODkwZGZkMzg1YTVlOTJkNTg0OGRiY2VmZGJkOGVsy0VT:
00:20:07.611 22:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:07.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:07.611 22:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:07.611 22:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:07.611 22:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:07.611 22:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:07.611 22:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:07.611 22:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:20:07.611 22:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:20:07.611 22:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3
00:20:07.611 22:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:07.611 22:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:07.611 22:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:20:07.611 22:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:20:07.611 22:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:07.611 22:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:20:07.611 22:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:07.611 22:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:07.611 22:03:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:07.611 22:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:07.611 22:03:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:07.869
00:20:07.869 22:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:07.869 22:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:07.869 22:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:08.127 22:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:08.127 22:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:08.127 22:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:08.127 22:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:08.127 22:03:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:08.127 22:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:08.127 {
00:20:08.127 "cntlid": 7,
00:20:08.127 "qid": 0,
00:20:08.127 "state": "enabled",
00:20:08.127 "thread": "nvmf_tgt_poll_group_000",
00:20:08.127 "listen_address": {
00:20:08.127 "trtype": "TCP",
00:20:08.127 "adrfam": "IPv4",
00:20:08.127 "traddr": "10.0.0.2",
00:20:08.127 "trsvcid": "4420"
00:20:08.127 },
00:20:08.127 "peer_address": {
00:20:08.127 "trtype": "TCP",
00:20:08.127 "adrfam": "IPv4",
00:20:08.127 "traddr": "10.0.0.1",
00:20:08.127 "trsvcid": "40428"
00:20:08.127 },
00:20:08.127 "auth": {
00:20:08.127 "state": "completed",
00:20:08.127 "digest": "sha256",
00:20:08.127 "dhgroup": "null"
00:20:08.127 }
00:20:08.127 }
00:20:08.127 ]' 22:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:08.385 22:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:08.385 22:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:08.385 22:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:20:08.385 22:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:08.385 22:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:08.385 22:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:08.385 22:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:08.642 22:03:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGIwMDcyMDlmODVmZWJkYzUwNDI4MzYzMDNlYTRkMmI4NDM1YzQ3ZTllYzcxMGRmZjk5NzE4NTlmOGY2ZGQ1Odfg3ic=:
00:20:09.576 22:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:09.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:09.576 22:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:09.576 22:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:09.576 22:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:09.576 22:03:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:09.576 22:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:20:09.576 22:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:09.576 22:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:09.576 22:03:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:09.834 22:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0
00:20:09.834 22:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:09.834 22:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:09.834 22:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:20:09.834 22:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:20:09.834 22:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:09.834 22:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:09.834 22:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:09.834 22:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:09.834 22:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:09.834 22:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:09.834 22:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:10.092
00:20:10.092 22:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:10.092 22:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:10.092 22:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:10.350 22:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:10.350 22:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:10.350 22:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:10.350 22:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:10.350 22:03:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:10.350 22:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:10.350 {
00:20:10.350 "cntlid": 9,
00:20:10.350 "qid": 0,
00:20:10.350 "state": "enabled",
00:20:10.350 "thread": "nvmf_tgt_poll_group_000",
00:20:10.350 "listen_address": {
00:20:10.350 "trtype": "TCP",
00:20:10.350 "adrfam": "IPv4",
00:20:10.350 "traddr": "10.0.0.2",
00:20:10.350 "trsvcid": "4420"
00:20:10.350 },
00:20:10.350 "peer_address": {
00:20:10.350 "trtype": "TCP",
00:20:10.350 "adrfam": "IPv4",
00:20:10.350 "traddr": "10.0.0.1",
00:20:10.350 "trsvcid": "40766"
00:20:10.350 },
00:20:10.350 "auth": {
00:20:10.350 "state": "completed",
00:20:10.350 "digest": "sha256",
00:20:10.350 "dhgroup": "ffdhe2048"
00:20:10.350 }
00:20:10.350 }
00:20:10.350 ]' 22:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:10.350 22:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:10.350 22:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:10.608 22:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:10.608 22:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:10.608 22:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:10.608 22:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:10.608 22:03:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:10.866 22:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkyNzNiYTJlYmYzZmFmMTViZjM1OTRkMTU0MDg1MmEyZDQ0OGMzNDI5MjZiYmJlkzD1iw==: --dhchap-ctrl-secret DHHC-1:03:NjE4OWVkYmJkMGM0MzJhYmE2MTgyZDY0NDIxYWUwZDI3NGNlNWQ0NzMyZWU3YTg5OTNkY2E3NTc5NjRlY2FhN8pEGEs=:
00:20:11.798 22:03:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:11.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:11.798 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:11.798 22:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:11.798 22:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:12.055 22:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:12.055 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:12.055 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:12.055 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:12.055 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1
00:20:12.055 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:12.055 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:12.055 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:20:12.055 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:20:12.055 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:12.055 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:12.055 22:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:12.055 22:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:12.055 22:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:12.055 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:12.055 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:12.312
00:20:12.312 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:12.312 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:12.312 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:12.570 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:12.570 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:12.570 22:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:12.570 22:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:12.570 22:03:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:12.570 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:12.570 {
00:20:12.570 "cntlid": 11,
00:20:12.570 "qid": 0,
00:20:12.570 "state": "enabled",
00:20:12.570 "thread": "nvmf_tgt_poll_group_000",
00:20:12.570 "listen_address": {
00:20:12.570 "trtype": "TCP",
00:20:12.570 "adrfam": "IPv4",
00:20:12.570 "traddr": "10.0.0.2",
00:20:12.570 "trsvcid": "4420"
00:20:12.570 },
00:20:12.570 "peer_address": {
00:20:12.570 "trtype": "TCP",
00:20:12.570 "adrfam": "IPv4",
00:20:12.570 "traddr": "10.0.0.1",
00:20:12.570 "trsvcid": "40790"
00:20:12.570 },
00:20:12.570 "auth": {
00:20:12.570 "state": "completed",
00:20:12.570 "digest": "sha256",
00:20:12.570 "dhgroup": "ffdhe2048"
00:20:12.570 }
00:20:12.570 }
00:20:12.570 ]'
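Every connect_authenticate iteration traced in this log follows the same shape, and the target/auth.sh@31-49 and @91-96 xtrace lines above are enough to reconstruct it. The sketch below is an illustrative reconstruction from those trace lines, not the verbatim script; rpc_cmd, the keys/ckeys arrays, and the NQNs are taken from the surrounding log, and dhgroups/digests are the loop variables visible at @91-92.

hostrpc() { # thin wrapper visible at target/auth.sh@31: drive the bdev_nvme host app
	/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
}

connect_authenticate() { # e.g. connect_authenticate sha256 ffdhe2048 1
	local digest=$1 dhgroup=$2 keyid=$3 qpairs
	local subnqn=nqn.2024-03.io.spdk:cnode0
	local hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

	# allow this host to authenticate with keyN (plus ckeyN when a controller key exists)
	rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
		--dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
	hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
		-q "$hostnqn" -n "$subnqn" \
		--dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

	# the controller must exist and the qpair must report a completed DH-HMAC-CHAP exchange
	[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
	qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
	[[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
	[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
	[[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
	hostrpc bdev_nvme_detach_controller nvme0
}

# driving loops, per the @91-96 trace: every digest x dhgroup x key combination
for digest in "${digests[@]}"; do
	for dhgroup in "${dhgroups[@]}"; do
		for keyid in "${!keys[@]}"; do
			hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
			connect_authenticate "$digest" "$dhgroup" "$keyid"
		done
	done
done

After each SPDK-initiator pass, the @52-56 trace lines replay the same pairing with the kernel initiator: nvme connect is given the formatted DHHC-1 secrets directly (--dhchap-secret for the host key, --dhchap-ctrl-secret for the controller key), then the controller is disconnected and the host entry removed before the next iteration.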
22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:12.570 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:12.570 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:12.570 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:12.570 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:12.828 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:12.828 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:12.828 22:03:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:12.828 22:03:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTVkYmQ3MzgyZDVkNWRhOWExYjBmNGVlYmUwZDg0MGJHHEcY: --dhchap-ctrl-secret DHHC-1:02:YmU3MzUxYmYxOWVmNTI1ZjIwYjFiMTk3Yjk2NzAzZmQwM2RmMjVkZjFlZWYwYWQ0E3C4iA==:
00:20:14.200 22:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:14.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:14.200 22:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:14.200 22:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:14.200 22:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:14.200 22:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:14.200 22:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:14.200 22:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:14.200 22:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:14.200 22:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2
00:20:14.200 22:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:14.200 22:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:14.200 22:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:20:14.200 22:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:20:14.200 22:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:14.200 22:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:14.200 22:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:14.200 22:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:14.200 22:03:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:14.200 22:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:14.200 22:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:14.456
00:20:14.456 22:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:14.456 22:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:14.456 22:03:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:14.713 22:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:14.713 22:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:14.713 22:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:14.713 22:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:14.713 22:03:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:14.713 22:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:14.713 {
00:20:14.713 "cntlid": 13,
00:20:14.713 "qid": 0,
00:20:14.713 "state": "enabled",
00:20:14.713 "thread": "nvmf_tgt_poll_group_000",
00:20:14.713 "listen_address": {
00:20:14.713 "trtype": "TCP",
00:20:14.713 "adrfam": "IPv4",
00:20:14.713 "traddr": "10.0.0.2",
00:20:14.713 "trsvcid": "4420"
00:20:14.713 },
00:20:14.713 "peer_address": {
00:20:14.713 "trtype": "TCP",
00:20:14.713 "adrfam": "IPv4",
00:20:14.713 "traddr": "10.0.0.1",
00:20:14.713 "trsvcid": "40816"
00:20:14.713 },
00:20:14.713 "auth": {
00:20:14.713 "state": "completed",
00:20:14.713 "digest": "sha256",
00:20:14.713 "dhgroup": "ffdhe2048"
00:20:14.713 }
00:20:14.713 }
00:20:14.713 ]' 22:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:14.713 22:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:14.713 22:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:14.970 22:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:14.970 22:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:14.970 22:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:14.970 22:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:14.970 22:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:15.228 22:03:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2E1NGIzMTcxYTNjZGNhNGZjYmJkYzk1YzExYTE5OWQ3Y2NkNDBmNzI5ZmE4M2JiS38vHQ==: --dhchap-ctrl-secret DHHC-1:01:NjljODkwZGZkMzg1YTVlOTJkNTg0OGRiY2VmZGJkOGVsy0VT:
00:20:16.164 22:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:16.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:16.164 22:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:16.164 22:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:16.164 22:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:16.164 22:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:16.164 22:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:16.164 22:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:16.164 22:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:16.422 22:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3
00:20:16.422 22:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:16.422 22:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:16.422 22:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:20:16.422 22:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:20:16.422 22:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:16.422 22:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:20:16.422 22:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:16.422 22:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:16.422 22:03:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:16.422 22:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:16.422 22:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:20:16.680
00:20:16.680 22:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:16.680 22:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:16.680 22:03:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:16.938 22:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:16.938 22:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:16.938 22:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:16.938 22:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:16.938 22:03:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:16.938 22:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:16.938 {
00:20:16.938 "cntlid": 15,
00:20:16.938 "qid": 0,
00:20:16.938 "state": "enabled",
00:20:16.939 "thread": "nvmf_tgt_poll_group_000",
00:20:16.939 "listen_address": {
00:20:16.939 "trtype": "TCP",
00:20:16.939 "adrfam": "IPv4",
00:20:16.939 "traddr": "10.0.0.2",
00:20:16.939 "trsvcid": "4420"
00:20:16.939 },
00:20:16.939 "peer_address": {
00:20:16.939 "trtype": "TCP",
00:20:16.939 "adrfam": "IPv4",
00:20:16.939 "traddr": "10.0.0.1",
00:20:16.939 "trsvcid": "40852"
00:20:16.939 },
00:20:16.939 "auth": {
00:20:16.939 "state": "completed",
00:20:16.939 "digest": "sha256",
00:20:16.939 "dhgroup": "ffdhe2048"
00:20:16.939 }
00:20:16.939 }
00:20:16.939 ]' 22:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:16.939 22:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:16.939 22:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:16.939 22:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:16.939 22:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:17.196 22:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:17.197 22:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:17.197 22:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:17.454 22:03:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGIwMDcyMDlmODVmZWJkYzUwNDI4MzYzMDNlYTRkMmI4NDM1YzQ3ZTllYzcxMGRmZjk5NzE4NTlmOGY2ZGQ1Odfg3ic=:
00:20:18.388 22:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:18.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:18.388 22:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:18.388 22:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:18.388 22:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:18.388 22:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:18.388 22:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:20:18.388 22:03:37 nvmf_tcp.nvmf_auth_target
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.388 22:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:18.388 22:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:18.646 22:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:20:18.646 22:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.646 22:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:18.647 22:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:18.647 22:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:18.647 22:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.647 22:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.647 22:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.647 22:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.647 22:03:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.647 22:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.647 22:03:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.905 00:20:18.905 22:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.905 22:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.905 22:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.164 22:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.164 22:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.164 22:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.164 22:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.164 22:03:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.164 22:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.164 { 00:20:19.164 "cntlid": 17, 00:20:19.164 "qid": 0, 00:20:19.164 "state": "enabled", 00:20:19.164 "thread": "nvmf_tgt_poll_group_000", 00:20:19.164 "listen_address": { 00:20:19.164 "trtype": "TCP", 00:20:19.164 "adrfam": "IPv4", 00:20:19.164 "traddr": 
"10.0.0.2", 00:20:19.164 "trsvcid": "4420" 00:20:19.164 }, 00:20:19.164 "peer_address": { 00:20:19.164 "trtype": "TCP", 00:20:19.164 "adrfam": "IPv4", 00:20:19.164 "traddr": "10.0.0.1", 00:20:19.164 "trsvcid": "55534" 00:20:19.164 }, 00:20:19.164 "auth": { 00:20:19.164 "state": "completed", 00:20:19.164 "digest": "sha256", 00:20:19.164 "dhgroup": "ffdhe3072" 00:20:19.164 } 00:20:19.164 } 00:20:19.164 ]' 00:20:19.164 22:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.451 22:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.451 22:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.451 22:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:19.451 22:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.451 22:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.451 22:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.451 22:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.709 22:03:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkyNzNiYTJlYmYzZmFmMTViZjM1OTRkMTU0MDg1MmEyZDQ0OGMzNDI5MjZiYmJlkzD1iw==: --dhchap-ctrl-secret DHHC-1:03:NjE4OWVkYmJkMGM0MzJhYmE2MTgyZDY0NDIxYWUwZDI3NGNlNWQ0NzMyZWU3YTg5OTNkY2E3NTc5NjRlY2FhN8pEGEs=: 00:20:20.643 22:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.643 22:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.643 22:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.643 22:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.643 22:03:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.643 22:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.643 22:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:20.643 22:03:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:20.901 22:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:20:20.901 22:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.901 22:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:20.901 22:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:20.901 22:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:20.901 22:03:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.901 22:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.901 22:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.901 22:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.901 22:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.901 22:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:20.901 22:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.159 00:20:21.159 22:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.159 22:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.159 22:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.417 22:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.417 22:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.417 22:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.417 22:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.417 22:03:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.417 22:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.417 { 00:20:21.417 "cntlid": 19, 00:20:21.417 "qid": 0, 00:20:21.417 "state": "enabled", 00:20:21.417 "thread": "nvmf_tgt_poll_group_000", 00:20:21.417 "listen_address": { 00:20:21.417 "trtype": "TCP", 00:20:21.417 "adrfam": "IPv4", 00:20:21.417 "traddr": "10.0.0.2", 00:20:21.417 "trsvcid": "4420" 00:20:21.417 }, 00:20:21.417 "peer_address": { 00:20:21.417 "trtype": "TCP", 00:20:21.417 "adrfam": "IPv4", 00:20:21.417 "traddr": "10.0.0.1", 00:20:21.417 "trsvcid": "55572" 00:20:21.417 }, 00:20:21.417 "auth": { 00:20:21.417 "state": "completed", 00:20:21.417 "digest": "sha256", 00:20:21.417 "dhgroup": "ffdhe3072" 00:20:21.417 } 00:20:21.417 } 00:20:21.417 ]' 00:20:21.417 22:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.417 22:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:21.417 22:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.675 22:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:21.675 22:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.675 22:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.675 22:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.675 22:03:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.933 22:03:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTVkYmQ3MzgyZDVkNWRhOWExYjBmNGVlYmUwZDg0MGJHHEcY: --dhchap-ctrl-secret DHHC-1:02:YmU3MzUxYmYxOWVmNTI1ZjIwYjFiMTk3Yjk2NzAzZmQwM2RmMjVkZjFlZWYwYWQ0E3C4iA==: 00:20:22.865 22:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.865 22:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.865 22:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.865 22:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.865 22:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.865 22:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.865 22:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:22.865 22:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:23.123 22:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:20:23.123 22:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.123 22:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:23.123 22:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:23.123 22:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:23.123 22:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.123 22:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.123 22:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.123 22:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.123 22:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.123 22:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.123 22:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.381 00:20:23.381 22:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.381 22:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.381 22:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.639 22:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.639 22:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.639 22:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.639 22:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.639 22:03:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.639 22:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.639 { 00:20:23.639 "cntlid": 21, 00:20:23.639 "qid": 0, 00:20:23.639 "state": "enabled", 00:20:23.639 "thread": "nvmf_tgt_poll_group_000", 00:20:23.639 "listen_address": { 00:20:23.639 "trtype": "TCP", 00:20:23.639 "adrfam": "IPv4", 00:20:23.639 "traddr": "10.0.0.2", 00:20:23.639 "trsvcid": "4420" 00:20:23.639 }, 00:20:23.639 "peer_address": { 00:20:23.639 "trtype": "TCP", 00:20:23.639 "adrfam": "IPv4", 00:20:23.639 "traddr": "10.0.0.1", 00:20:23.639 "trsvcid": "55600" 00:20:23.639 }, 00:20:23.639 "auth": { 00:20:23.639 "state": "completed", 00:20:23.639 "digest": "sha256", 00:20:23.639 "dhgroup": "ffdhe3072" 00:20:23.639 } 00:20:23.639 } 00:20:23.639 ]' 00:20:23.639 22:03:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.639 22:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.897 22:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.897 22:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:23.897 22:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.897 22:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.897 22:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.897 22:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.155 22:03:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2E1NGIzMTcxYTNjZGNhNGZjYmJkYzk1YzExYTE5OWQ3Y2NkNDBmNzI5ZmE4M2JiS38vHQ==: --dhchap-ctrl-secret DHHC-1:01:NjljODkwZGZkMzg1YTVlOTJkNTg0OGRiY2VmZGJkOGVsy0VT: 00:20:25.085 22:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
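After each SPDK-host pass, the same key pair is exercised through the kernel initiator: nvme-cli connects with the host secret (--dhchap-secret) and, when bidirectional authentication is under test, the controller secret (--dhchap-ctrl-secret), then disconnects. The DHHC-1:NN: prefix on each secret is the NVMe DH-HMAC-CHAP key format, where the two-digit field records the hash used to transform the key material (00 meaning an untransformed key). A sketch of that round trip with the secrets held in shell variables (the actual base64 blobs appear in the log; $hostnqn, $hostid, $host_key and $ctrl_key are placeholders of mine):

  # Kernel-initiator connect/disconnect with bidirectional DH-HMAC-CHAP;
  # flags mirror the nvme connect invocations in this log.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret "$host_key" --dhchap-ctrl-secret "$ctrl_key"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The "disconnected 1 controller(s)" line that follows each disconnect is the functional pass signal for that combination: the connect only succeeds if the DH-HMAC-CHAP exchange completed.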
00:20:25.085 22:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.085 22:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.085 22:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.085 22:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.085 22:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.085 22:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:25.085 22:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:25.342 22:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:20:25.342 22:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.342 22:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:25.342 22:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:25.342 22:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:25.342 22:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.342 22:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:25.342 22:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.342 22:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.342 22:03:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.342 22:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:25.342 22:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:25.600 00:20:25.600 22:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.600 22:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.600 22:03:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.857 22:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.857 22:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.857 22:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.857 22:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x
00:20:25.857 22:03:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:25.857 22:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:25.857 {
00:20:25.857 "cntlid": 23,
00:20:25.857 "qid": 0,
00:20:25.857 "state": "enabled",
00:20:25.857 "thread": "nvmf_tgt_poll_group_000",
00:20:25.857 "listen_address": {
00:20:25.857 "trtype": "TCP",
00:20:25.857 "adrfam": "IPv4",
00:20:25.857 "traddr": "10.0.0.2",
00:20:25.857 "trsvcid": "4420"
00:20:25.857 },
00:20:25.857 "peer_address": {
00:20:25.857 "trtype": "TCP",
00:20:25.857 "adrfam": "IPv4",
00:20:25.857 "traddr": "10.0.0.1",
00:20:25.857 "trsvcid": "55626"
00:20:25.857 },
00:20:25.857 "auth": {
00:20:25.857 "state": "completed",
00:20:25.857 "digest": "sha256",
00:20:25.857 "dhgroup": "ffdhe3072"
00:20:25.857 }
00:20:25.857 }
00:20:25.857 ]'
00:20:26.116 22:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:26.116 22:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:26.116 22:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:26.116 22:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:26.116 22:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:26.116 22:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:26.116 22:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:26.116 22:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:26.374 22:03:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGIwMDcyMDlmODVmZWJkYzUwNDI4MzYzMDNlYTRkMmI4NDM1YzQ3ZTllYzcxMGRmZjk5NzE4NTlmOGY2ZGQ1Odfg3ic=:
00:20:27.306 22:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:27.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:27.306 22:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:27.306 22:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:27.306 22:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:27.306 22:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:27.306 22:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:20:27.306 22:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:27.306 22:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:27.306 22:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
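Each outer iteration reconfigures the host so that exactly one digest and one DH group are allowed (bdev_nvme_set_options) before replaying connect_authenticate for every key index; this log walks ffdhe2048, ffdhe3072, ffdhe4096 and then ffdhe6144. From the xtrace the driver appears to be shaped roughly like the following (array contents limited to the groups visible here; the real script may define more):

  # Matrix driver: pin one digest/dhgroup combination at a time, then run
  # the full attach/verify/detach cycle for every configured key.
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          hostrpc bdev_nvme_set_options --dhchap-digests sha256 \
              --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha256 "$dhgroup" "$keyid"
      done
  done

Pinning the host to a single group per pass is what makes the later qpairs assertion meaningful: the negotiated dhgroup can only ever be the one just configured.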
00:20:27.565 22:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0
00:20:27.565 22:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:27.565 22:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:27.565 22:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:20:27.565 22:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:20:27.565 22:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:27.565 22:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:27.565 22:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:27.565 22:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:27.565 22:03:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:27.565 22:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:27.565 22:03:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:28.131
00:20:28.131 22:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:28.131 22:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:28.131 22:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:28.131 22:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:28.131 22:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:28.131 22:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:28.131 22:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:28.388 22:03:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:28.388 22:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:28.388 {
00:20:28.388 "cntlid": 25,
00:20:28.388 "qid": 0,
00:20:28.388 "state": "enabled",
00:20:28.388 "thread": "nvmf_tgt_poll_group_000",
00:20:28.388 "listen_address": {
00:20:28.388 "trtype": "TCP",
00:20:28.388 "adrfam": "IPv4",
00:20:28.388 "traddr": "10.0.0.2",
00:20:28.388 "trsvcid": "4420"
00:20:28.388 },
00:20:28.388 "peer_address": {
00:20:28.388 "trtype": "TCP",
00:20:28.388 "adrfam": "IPv4",
00:20:28.388 "traddr": "10.0.0.1",
00:20:28.388 "trsvcid": "55666"
00:20:28.388 },
00:20:28.388 "auth": {
00:20:28.388 "state": "completed",
00:20:28.388 "digest": "sha256",
00:20:28.388 "dhgroup": "ffdhe4096"
00:20:28.388 }
00:20:28.388 }
00:20:28.389 ]'
00:20:28.389 22:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:28.389 22:03:47
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.389 22:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.389 22:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:28.389 22:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.389 22:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.389 22:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.389 22:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.646 22:03:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkyNzNiYTJlYmYzZmFmMTViZjM1OTRkMTU0MDg1MmEyZDQ0OGMzNDI5MjZiYmJlkzD1iw==: --dhchap-ctrl-secret DHHC-1:03:NjE4OWVkYmJkMGM0MzJhYmE2MTgyZDY0NDIxYWUwZDI3NGNlNWQ0NzMyZWU3YTg5OTNkY2E3NTc5NjRlY2FhN8pEGEs=: 00:20:29.577 22:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.578 22:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.578 22:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.578 22:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.578 22:03:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.578 22:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.578 22:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:29.578 22:03:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:29.835 22:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:20:29.835 22:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.835 22:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:29.835 22:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:29.835 22:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:29.835 22:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.835 22:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.835 22:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.835 22:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.835 22:03:49 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.835 22:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.835 22:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.092 00:20:30.350 22:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.350 22:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.350 22:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.350 22:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.350 22:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.350 22:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.350 22:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.608 22:03:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.608 22:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.608 { 00:20:30.608 "cntlid": 27, 00:20:30.608 "qid": 0, 00:20:30.608 "state": "enabled", 00:20:30.608 "thread": "nvmf_tgt_poll_group_000", 00:20:30.608 "listen_address": { 00:20:30.608 "trtype": "TCP", 00:20:30.608 "adrfam": "IPv4", 00:20:30.608 "traddr": "10.0.0.2", 00:20:30.608 "trsvcid": "4420" 00:20:30.608 }, 00:20:30.608 "peer_address": { 00:20:30.608 "trtype": "TCP", 00:20:30.608 "adrfam": "IPv4", 00:20:30.608 "traddr": "10.0.0.1", 00:20:30.608 "trsvcid": "58460" 00:20:30.608 }, 00:20:30.608 "auth": { 00:20:30.608 "state": "completed", 00:20:30.608 "digest": "sha256", 00:20:30.608 "dhgroup": "ffdhe4096" 00:20:30.608 } 00:20:30.608 } 00:20:30.608 ]' 00:20:30.608 22:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.608 22:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:30.608 22:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.608 22:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:30.608 22:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.608 22:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.608 22:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.608 22:03:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.866 22:03:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTVkYmQ3MzgyZDVkNWRhOWExYjBmNGVlYmUwZDg0MGJHHEcY: --dhchap-ctrl-secret DHHC-1:02:YmU3MzUxYmYxOWVmNTI1ZjIwYjFiMTk3Yjk2NzAzZmQwM2RmMjVkZjFlZWYwYWQ0E3C4iA==: 00:20:31.796 22:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.796 22:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.796 22:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.796 22:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.796 22:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.796 22:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.796 22:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:31.796 22:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:32.091 22:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:20:32.091 22:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.091 22:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:32.091 22:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:32.091 22:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:32.091 22:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.091 22:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.091 22:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.091 22:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.091 22:03:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.091 22:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.091 22:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.655 00:20:32.655 22:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.655 22:03:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.655 22:03:51 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.655 22:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.655 22:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.655 22:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.655 22:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.655 22:03:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.655 22:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.655 { 00:20:32.655 "cntlid": 29, 00:20:32.655 "qid": 0, 00:20:32.655 "state": "enabled", 00:20:32.655 "thread": "nvmf_tgt_poll_group_000", 00:20:32.655 "listen_address": { 00:20:32.655 "trtype": "TCP", 00:20:32.655 "adrfam": "IPv4", 00:20:32.655 "traddr": "10.0.0.2", 00:20:32.655 "trsvcid": "4420" 00:20:32.655 }, 00:20:32.655 "peer_address": { 00:20:32.655 "trtype": "TCP", 00:20:32.655 "adrfam": "IPv4", 00:20:32.655 "traddr": "10.0.0.1", 00:20:32.655 "trsvcid": "58484" 00:20:32.655 }, 00:20:32.655 "auth": { 00:20:32.655 "state": "completed", 00:20:32.655 "digest": "sha256", 00:20:32.655 "dhgroup": "ffdhe4096" 00:20:32.655 } 00:20:32.655 } 00:20:32.655 ]' 00:20:32.655 22:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.913 22:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:32.913 22:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.913 22:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:32.913 22:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.913 22:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.913 22:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.913 22:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.204 22:03:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2E1NGIzMTcxYTNjZGNhNGZjYmJkYzk1YzExYTE5OWQ3Y2NkNDBmNzI5ZmE4M2JiS38vHQ==: --dhchap-ctrl-secret DHHC-1:01:NjljODkwZGZkMzg1YTVlOTJkNTg0OGRiY2VmZGJkOGVsy0VT: 00:20:34.135 22:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.135 22:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.135 22:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.135 22:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.135 22:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.135 22:03:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.135 22:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:34.135 22:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:34.393 22:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:20:34.393 22:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.393 22:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:34.393 22:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:34.393 22:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:34.393 22:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.393 22:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:34.393 22:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.393 22:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.393 22:03:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.393 22:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:34.393 22:03:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:34.651 00:20:34.651 22:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.651 22:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.651 22:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.910 22:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.910 22:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.910 22:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.910 22:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.910 22:03:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.910 22:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.910 { 00:20:34.910 "cntlid": 31, 00:20:34.910 "qid": 0, 00:20:34.910 "state": "enabled", 00:20:34.910 "thread": "nvmf_tgt_poll_group_000", 00:20:34.910 "listen_address": { 00:20:34.910 "trtype": "TCP", 00:20:34.910 "adrfam": "IPv4", 00:20:34.910 "traddr": "10.0.0.2", 00:20:34.910 "trsvcid": "4420" 00:20:34.910 }, 
00:20:34.910 "peer_address": { 00:20:34.910 "trtype": "TCP", 00:20:34.910 "adrfam": "IPv4", 00:20:34.910 "traddr": "10.0.0.1", 00:20:34.910 "trsvcid": "58502" 00:20:34.910 }, 00:20:34.910 "auth": { 00:20:34.910 "state": "completed", 00:20:34.910 "digest": "sha256", 00:20:34.910 "dhgroup": "ffdhe4096" 00:20:34.910 } 00:20:34.910 } 00:20:34.910 ]' 00:20:34.910 22:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.168 22:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:35.168 22:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.168 22:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:35.168 22:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.168 22:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.168 22:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.168 22:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.427 22:03:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGIwMDcyMDlmODVmZWJkYzUwNDI4MzYzMDNlYTRkMmI4NDM1YzQ3ZTllYzcxMGRmZjk5NzE4NTlmOGY2ZGQ1Odfg3ic=: 00:20:36.362 22:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.362 22:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.362 22:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.362 22:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.362 22:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.362 22:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.362 22:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.362 22:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:36.362 22:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:36.620 22:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:20:36.620 22:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.620 22:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:36.620 22:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:36.620 22:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:36.620 22:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:20:36.620 22:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.620 22:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.620 22:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.620 22:03:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.620 22:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.620 22:03:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.187 00:20:37.187 22:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.187 22:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.187 22:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.445 22:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.445 22:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.445 22:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.445 22:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.445 22:03:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.445 22:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.445 { 00:20:37.445 "cntlid": 33, 00:20:37.445 "qid": 0, 00:20:37.445 "state": "enabled", 00:20:37.445 "thread": "nvmf_tgt_poll_group_000", 00:20:37.445 "listen_address": { 00:20:37.445 "trtype": "TCP", 00:20:37.445 "adrfam": "IPv4", 00:20:37.445 "traddr": "10.0.0.2", 00:20:37.445 "trsvcid": "4420" 00:20:37.445 }, 00:20:37.445 "peer_address": { 00:20:37.445 "trtype": "TCP", 00:20:37.445 "adrfam": "IPv4", 00:20:37.445 "traddr": "10.0.0.1", 00:20:37.445 "trsvcid": "58522" 00:20:37.445 }, 00:20:37.445 "auth": { 00:20:37.445 "state": "completed", 00:20:37.445 "digest": "sha256", 00:20:37.445 "dhgroup": "ffdhe6144" 00:20:37.445 } 00:20:37.445 } 00:20:37.445 ]' 00:20:37.445 22:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.445 22:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:37.445 22:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.701 22:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:37.701 22:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.701 22:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.701 22:03:56 
00:20:37.701 22:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:37.701 22:03:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:37.957 22:03:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkyNzNiYTJlYmYzZmFmMTViZjM1OTRkMTU0MDg1MmEyZDQ0OGMzNDI5MjZiYmJlkzD1iw==: --dhchap-ctrl-secret DHHC-1:03:NjE4OWVkYmJkMGM0MzJhYmE2MTgyZDY0NDIxYWUwZDI3NGNlNWQ0NzMyZWU3YTg5OTNkY2E3NTc5NjRlY2FhN8pEGEs=:
00:20:38.888 22:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:38.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:38.888 22:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:38.888 22:03:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:38.888 22:03:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:38.888 22:03:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:38.888 22:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:38.888 22:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:38.888 22:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:39.146 22:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1
00:20:39.146 22:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:39.146 22:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:20:39.146 22:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:20:39.146 22:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:20:39.146 22:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:39.146 22:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:39.146 22:03:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:39.146 22:03:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:39.146 22:03:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:39.146 22:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:39.146 22:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.711 00:20:39.711 22:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.711 22:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.711 22:03:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.969 22:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.969 22:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.969 22:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.969 22:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.969 22:03:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.969 22:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.969 { 00:20:39.969 "cntlid": 35, 00:20:39.969 "qid": 0, 00:20:39.969 "state": "enabled", 00:20:39.969 "thread": "nvmf_tgt_poll_group_000", 00:20:39.969 "listen_address": { 00:20:39.969 "trtype": "TCP", 00:20:39.969 "adrfam": "IPv4", 00:20:39.969 "traddr": "10.0.0.2", 00:20:39.969 "trsvcid": "4420" 00:20:39.969 }, 00:20:39.969 "peer_address": { 00:20:39.969 "trtype": "TCP", 00:20:39.969 "adrfam": "IPv4", 00:20:39.969 "traddr": "10.0.0.1", 00:20:39.969 "trsvcid": "36754" 00:20:39.969 }, 00:20:39.969 "auth": { 00:20:39.969 "state": "completed", 00:20:39.969 "digest": "sha256", 00:20:39.969 "dhgroup": "ffdhe6144" 00:20:39.969 } 00:20:39.969 } 00:20:39.969 ]' 00:20:39.970 22:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.970 22:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:39.970 22:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.970 22:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:39.970 22:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.970 22:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.970 22:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.970 22:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.228 22:03:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTVkYmQ3MzgyZDVkNWRhOWExYjBmNGVlYmUwZDg0MGJHHEcY: --dhchap-ctrl-secret DHHC-1:02:YmU3MzUxYmYxOWVmNTI1ZjIwYjFiMTk3Yjk2NzAzZmQwM2RmMjVkZjFlZWYwYWQ0E3C4iA==: 00:20:41.604 22:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.604 22:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.604 22:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.604 22:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.604 22:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.604 22:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.604 22:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:41.604 22:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:41.604 22:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:20:41.604 22:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.604 22:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:41.604 22:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:41.604 22:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:41.604 22:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.604 22:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.604 22:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.604 22:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.604 22:04:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.604 22:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.604 22:04:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.171 00:20:42.171 22:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.171 22:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.171 22:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.429 22:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.429 22:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.429 22:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.429 22:04:01 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:42.429 22:04:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.429 22:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.429 { 00:20:42.429 "cntlid": 37, 00:20:42.429 "qid": 0, 00:20:42.429 "state": "enabled", 00:20:42.429 "thread": "nvmf_tgt_poll_group_000", 00:20:42.429 "listen_address": { 00:20:42.429 "trtype": "TCP", 00:20:42.429 "adrfam": "IPv4", 00:20:42.429 "traddr": "10.0.0.2", 00:20:42.429 "trsvcid": "4420" 00:20:42.429 }, 00:20:42.429 "peer_address": { 00:20:42.429 "trtype": "TCP", 00:20:42.429 "adrfam": "IPv4", 00:20:42.429 "traddr": "10.0.0.1", 00:20:42.429 "trsvcid": "36766" 00:20:42.429 }, 00:20:42.429 "auth": { 00:20:42.429 "state": "completed", 00:20:42.429 "digest": "sha256", 00:20:42.429 "dhgroup": "ffdhe6144" 00:20:42.429 } 00:20:42.429 } 00:20:42.429 ]' 00:20:42.429 22:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.429 22:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:42.429 22:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.429 22:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.429 22:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.687 22:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.687 22:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.687 22:04:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.945 22:04:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2E1NGIzMTcxYTNjZGNhNGZjYmJkYzk1YzExYTE5OWQ3Y2NkNDBmNzI5ZmE4M2JiS38vHQ==: --dhchap-ctrl-secret DHHC-1:01:NjljODkwZGZkMzg1YTVlOTJkNTg0OGRiY2VmZGJkOGVsy0VT: 00:20:43.879 22:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.879 22:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.879 22:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.879 22:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.879 22:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.879 22:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.879 22:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:43.879 22:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:44.136 22:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:20:44.136 22:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.136 22:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:44.136 22:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:44.136 22:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:44.136 22:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.136 22:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:44.136 22:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.136 22:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.136 22:04:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.136 22:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:44.136 22:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:44.700 00:20:44.700 22:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.700 22:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.700 22:04:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.958 22:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.958 22:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.958 22:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.958 22:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.958 22:04:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.958 22:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.958 { 00:20:44.958 "cntlid": 39, 00:20:44.958 "qid": 0, 00:20:44.958 "state": "enabled", 00:20:44.958 "thread": "nvmf_tgt_poll_group_000", 00:20:44.958 "listen_address": { 00:20:44.958 "trtype": "TCP", 00:20:44.958 "adrfam": "IPv4", 00:20:44.958 "traddr": "10.0.0.2", 00:20:44.958 "trsvcid": "4420" 00:20:44.958 }, 00:20:44.958 "peer_address": { 00:20:44.958 "trtype": "TCP", 00:20:44.958 "adrfam": "IPv4", 00:20:44.958 "traddr": "10.0.0.1", 00:20:44.958 "trsvcid": "36798" 00:20:44.958 }, 00:20:44.958 "auth": { 00:20:44.958 "state": "completed", 00:20:44.958 "digest": "sha256", 00:20:44.958 "dhgroup": "ffdhe6144" 00:20:44.958 } 00:20:44.958 } 00:20:44.958 ]' 00:20:44.958 22:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.958 22:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:44.958 22:04:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.958 22:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:44.958 22:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.216 22:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.216 22:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.216 22:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.473 22:04:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGIwMDcyMDlmODVmZWJkYzUwNDI4MzYzMDNlYTRkMmI4NDM1YzQ3ZTllYzcxMGRmZjk5NzE4NTlmOGY2ZGQ1Odfg3ic=: 00:20:46.405 22:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.405 22:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.405 22:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.405 22:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.405 22:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.405 22:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.405 22:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.405 22:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:46.405 22:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:46.663 22:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:20:46.663 22:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.663 22:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:46.663 22:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:46.663 22:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:46.663 22:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.663 22:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.663 22:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.663 22:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.663 22:04:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.663 22:04:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.663 22:04:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.638 00:20:47.638 22:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.638 22:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.638 22:04:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.896 22:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.896 22:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.896 22:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.896 22:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.896 22:04:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.896 22:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.896 { 00:20:47.896 "cntlid": 41, 00:20:47.896 "qid": 0, 00:20:47.896 "state": "enabled", 00:20:47.896 "thread": "nvmf_tgt_poll_group_000", 00:20:47.896 "listen_address": { 00:20:47.896 "trtype": "TCP", 00:20:47.896 "adrfam": "IPv4", 00:20:47.896 "traddr": "10.0.0.2", 00:20:47.896 "trsvcid": "4420" 00:20:47.896 }, 00:20:47.896 "peer_address": { 00:20:47.896 "trtype": "TCP", 00:20:47.896 "adrfam": "IPv4", 00:20:47.896 "traddr": "10.0.0.1", 00:20:47.896 "trsvcid": "36820" 00:20:47.896 }, 00:20:47.896 "auth": { 00:20:47.896 "state": "completed", 00:20:47.896 "digest": "sha256", 00:20:47.896 "dhgroup": "ffdhe8192" 00:20:47.896 } 00:20:47.896 } 00:20:47.896 ]' 00:20:47.896 22:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.896 22:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:47.896 22:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.896 22:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:47.896 22:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.896 22:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.896 22:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.896 22:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.153 22:04:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:ODkyNzNiYTJlYmYzZmFmMTViZjM1OTRkMTU0MDg1MmEyZDQ0OGMzNDI5MjZiYmJlkzD1iw==: --dhchap-ctrl-secret DHHC-1:03:NjE4OWVkYmJkMGM0MzJhYmE2MTgyZDY0NDIxYWUwZDI3NGNlNWQ0NzMyZWU3YTg5OTNkY2E3NTc5NjRlY2FhN8pEGEs=: 00:20:49.086 22:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.086 22:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.086 22:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.086 22:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.086 22:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.086 22:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.086 22:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:49.086 22:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:49.345 22:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:49.345 22:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.345 22:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:49.345 22:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:49.345 22:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:49.345 22:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.345 22:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.345 22:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.345 22:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.345 22:04:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.345 22:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.345 22:04:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.279 00:20:50.279 22:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.279 22:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.279 22:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.538 22:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.538 22:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.538 22:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.538 22:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.538 22:04:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.538 22:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.538 { 00:20:50.538 "cntlid": 43, 00:20:50.538 "qid": 0, 00:20:50.538 "state": "enabled", 00:20:50.538 "thread": "nvmf_tgt_poll_group_000", 00:20:50.538 "listen_address": { 00:20:50.538 "trtype": "TCP", 00:20:50.538 "adrfam": "IPv4", 00:20:50.538 "traddr": "10.0.0.2", 00:20:50.538 "trsvcid": "4420" 00:20:50.538 }, 00:20:50.538 "peer_address": { 00:20:50.538 "trtype": "TCP", 00:20:50.538 "adrfam": "IPv4", 00:20:50.538 "traddr": "10.0.0.1", 00:20:50.538 "trsvcid": "58826" 00:20:50.538 }, 00:20:50.538 "auth": { 00:20:50.538 "state": "completed", 00:20:50.538 "digest": "sha256", 00:20:50.538 "dhgroup": "ffdhe8192" 00:20:50.538 } 00:20:50.538 } 00:20:50.538 ]' 00:20:50.538 22:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.538 22:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:50.538 22:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.796 22:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:50.796 22:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.796 22:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.796 22:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.796 22:04:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.053 22:04:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTVkYmQ3MzgyZDVkNWRhOWExYjBmNGVlYmUwZDg0MGJHHEcY: --dhchap-ctrl-secret DHHC-1:02:YmU3MzUxYmYxOWVmNTI1ZjIwYjFiMTk3Yjk2NzAzZmQwM2RmMjVkZjFlZWYwYWQ0E3C4iA==: 00:20:51.983 22:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.983 22:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.983 22:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.983 22:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.983 22:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.983 22:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:20:51.983 22:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:51.983 22:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:52.241 22:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:52.241 22:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:52.241 22:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:52.241 22:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:52.241 22:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:52.241 22:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.241 22:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.241 22:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.241 22:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.241 22:04:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.241 22:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.241 22:04:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.174 00:20:53.174 22:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.174 22:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.174 22:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.174 22:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.174 22:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.174 22:04:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.174 22:04:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.174 22:04:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.174 22:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.174 { 00:20:53.174 "cntlid": 45, 00:20:53.174 "qid": 0, 00:20:53.174 "state": "enabled", 00:20:53.174 "thread": "nvmf_tgt_poll_group_000", 00:20:53.174 "listen_address": { 00:20:53.174 "trtype": "TCP", 00:20:53.174 "adrfam": "IPv4", 00:20:53.174 "traddr": "10.0.0.2", 00:20:53.174 "trsvcid": "4420" 
00:20:53.174 }, 00:20:53.174 "peer_address": { 00:20:53.174 "trtype": "TCP", 00:20:53.174 "adrfam": "IPv4", 00:20:53.174 "traddr": "10.0.0.1", 00:20:53.174 "trsvcid": "58850" 00:20:53.174 }, 00:20:53.174 "auth": { 00:20:53.174 "state": "completed", 00:20:53.174 "digest": "sha256", 00:20:53.174 "dhgroup": "ffdhe8192" 00:20:53.174 } 00:20:53.174 } 00:20:53.174 ]' 00:20:53.174 22:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.432 22:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:53.432 22:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.432 22:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:53.432 22:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.432 22:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.432 22:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.432 22:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.689 22:04:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2E1NGIzMTcxYTNjZGNhNGZjYmJkYzk1YzExYTE5OWQ3Y2NkNDBmNzI5ZmE4M2JiS38vHQ==: --dhchap-ctrl-secret DHHC-1:01:NjljODkwZGZkMzg1YTVlOTJkNTg0OGRiY2VmZGJkOGVsy0VT: 00:20:54.621 22:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.621 22:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.621 22:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.621 22:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.621 22:04:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.621 22:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:54.621 22:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:54.621 22:04:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:54.879 22:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:54.879 22:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:54.879 22:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:54.879 22:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:54.879 22:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:54.879 22:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.879 22:04:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:54.879 22:04:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.879 22:04:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.879 22:04:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.879 22:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:54.879 22:04:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:55.812 00:20:55.812 22:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.812 22:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.812 22:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.070 22:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.070 22:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.070 22:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.070 22:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.070 22:04:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.070 22:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.070 { 00:20:56.070 "cntlid": 47, 00:20:56.070 "qid": 0, 00:20:56.070 "state": "enabled", 00:20:56.070 "thread": "nvmf_tgt_poll_group_000", 00:20:56.070 "listen_address": { 00:20:56.070 "trtype": "TCP", 00:20:56.070 "adrfam": "IPv4", 00:20:56.070 "traddr": "10.0.0.2", 00:20:56.070 "trsvcid": "4420" 00:20:56.070 }, 00:20:56.070 "peer_address": { 00:20:56.070 "trtype": "TCP", 00:20:56.070 "adrfam": "IPv4", 00:20:56.070 "traddr": "10.0.0.1", 00:20:56.070 "trsvcid": "58876" 00:20:56.070 }, 00:20:56.070 "auth": { 00:20:56.070 "state": "completed", 00:20:56.070 "digest": "sha256", 00:20:56.070 "dhgroup": "ffdhe8192" 00:20:56.070 } 00:20:56.070 } 00:20:56.070 ]' 00:20:56.070 22:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.070 22:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:56.070 22:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.070 22:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:56.070 22:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.327 22:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.327 22:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.327 
22:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.584 22:04:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGIwMDcyMDlmODVmZWJkYzUwNDI4MzYzMDNlYTRkMmI4NDM1YzQ3ZTllYzcxMGRmZjk5NzE4NTlmOGY2ZGQ1Odfg3ic=: 00:20:57.518 22:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.518 22:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.518 22:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.518 22:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.518 22:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.518 22:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:57.518 22:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.518 22:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.518 22:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:57.518 22:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:57.776 22:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:57.776 22:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:57.776 22:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:57.776 22:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:57.776 22:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:57.776 22:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.776 22:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.776 22:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.776 22:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.776 22:04:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.776 22:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.776 22:04:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.034 00:20:58.035 22:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.035 22:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.035 22:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.292 22:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.292 22:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.292 22:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.292 22:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.292 22:04:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.292 22:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.292 { 00:20:58.292 "cntlid": 49, 00:20:58.292 "qid": 0, 00:20:58.292 "state": "enabled", 00:20:58.292 "thread": "nvmf_tgt_poll_group_000", 00:20:58.292 "listen_address": { 00:20:58.292 "trtype": "TCP", 00:20:58.292 "adrfam": "IPv4", 00:20:58.292 "traddr": "10.0.0.2", 00:20:58.292 "trsvcid": "4420" 00:20:58.292 }, 00:20:58.292 "peer_address": { 00:20:58.292 "trtype": "TCP", 00:20:58.292 "adrfam": "IPv4", 00:20:58.292 "traddr": "10.0.0.1", 00:20:58.292 "trsvcid": "58898" 00:20:58.292 }, 00:20:58.292 "auth": { 00:20:58.292 "state": "completed", 00:20:58.292 "digest": "sha384", 00:20:58.292 "dhgroup": "null" 00:20:58.292 } 00:20:58.292 } 00:20:58.293 ]' 00:20:58.293 22:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.293 22:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.293 22:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.293 22:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:58.293 22:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.293 22:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.293 22:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.293 22:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.550 22:04:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkyNzNiYTJlYmYzZmFmMTViZjM1OTRkMTU0MDg1MmEyZDQ0OGMzNDI5MjZiYmJlkzD1iw==: --dhchap-ctrl-secret DHHC-1:03:NjE4OWVkYmJkMGM0MzJhYmE2MTgyZDY0NDIxYWUwZDI3NGNlNWQ0NzMyZWU3YTg5OTNkY2E3NTc5NjRlY2FhN8pEGEs=: 00:20:59.483 22:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.483 22:04:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.483 22:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.483 22:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.483 22:04:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.483 22:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.483 22:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:59.483 22:04:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:59.742 22:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:59.742 22:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.742 22:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:59.742 22:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:59.742 22:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:59.742 22:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.742 22:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.742 22:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.742 22:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.742 22:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.742 22:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.742 22:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.001 00:21:00.259 22:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.259 22:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.259 22:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.259 22:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.259 22:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.259 22:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.259 22:04:19 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:00.259 22:04:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.259 22:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.259 { 00:21:00.259 "cntlid": 51, 00:21:00.259 "qid": 0, 00:21:00.259 "state": "enabled", 00:21:00.259 "thread": "nvmf_tgt_poll_group_000", 00:21:00.259 "listen_address": { 00:21:00.259 "trtype": "TCP", 00:21:00.259 "adrfam": "IPv4", 00:21:00.259 "traddr": "10.0.0.2", 00:21:00.259 "trsvcid": "4420" 00:21:00.259 }, 00:21:00.259 "peer_address": { 00:21:00.259 "trtype": "TCP", 00:21:00.259 "adrfam": "IPv4", 00:21:00.259 "traddr": "10.0.0.1", 00:21:00.259 "trsvcid": "43768" 00:21:00.259 }, 00:21:00.259 "auth": { 00:21:00.259 "state": "completed", 00:21:00.259 "digest": "sha384", 00:21:00.259 "dhgroup": "null" 00:21:00.259 } 00:21:00.259 } 00:21:00.259 ]' 00:21:00.518 22:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.518 22:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.518 22:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.518 22:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:00.518 22:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.518 22:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.518 22:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.518 22:04:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.776 22:04:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTVkYmQ3MzgyZDVkNWRhOWExYjBmNGVlYmUwZDg0MGJHHEcY: --dhchap-ctrl-secret DHHC-1:02:YmU3MzUxYmYxOWVmNTI1ZjIwYjFiMTk3Yjk2NzAzZmQwM2RmMjVkZjFlZWYwYWQ0E3C4iA==: 00:21:01.741 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.741 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.741 22:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.741 22:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.741 22:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.741 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.741 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:01.741 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:01.999 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:21:01.999 22:04:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.999 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:01.999 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:01.999 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:01.999 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.999 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.999 22:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.999 22:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.999 22:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.999 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.999 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.257 00:21:02.257 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.257 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.257 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.516 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.516 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.516 22:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.516 22:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.516 22:04:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.516 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.516 { 00:21:02.516 "cntlid": 53, 00:21:02.516 "qid": 0, 00:21:02.516 "state": "enabled", 00:21:02.516 "thread": "nvmf_tgt_poll_group_000", 00:21:02.516 "listen_address": { 00:21:02.516 "trtype": "TCP", 00:21:02.516 "adrfam": "IPv4", 00:21:02.516 "traddr": "10.0.0.2", 00:21:02.516 "trsvcid": "4420" 00:21:02.516 }, 00:21:02.516 "peer_address": { 00:21:02.516 "trtype": "TCP", 00:21:02.516 "adrfam": "IPv4", 00:21:02.516 "traddr": "10.0.0.1", 00:21:02.516 "trsvcid": "43796" 00:21:02.516 }, 00:21:02.516 "auth": { 00:21:02.516 "state": "completed", 00:21:02.516 "digest": "sha384", 00:21:02.516 "dhgroup": "null" 00:21:02.516 } 00:21:02.516 } 00:21:02.516 ]' 00:21:02.516 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.774 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:21:02.774 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.774 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:02.774 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.774 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.774 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.774 22:04:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.032 22:04:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2E1NGIzMTcxYTNjZGNhNGZjYmJkYzk1YzExYTE5OWQ3Y2NkNDBmNzI5ZmE4M2JiS38vHQ==: --dhchap-ctrl-secret DHHC-1:01:NjljODkwZGZkMzg1YTVlOTJkNTg0OGRiY2VmZGJkOGVsy0VT: 00:21:03.966 22:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.966 22:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.966 22:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.966 22:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.966 22:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.966 22:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:03.966 22:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:03.966 22:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:04.223 22:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:21:04.223 22:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.223 22:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:04.223 22:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:04.223 22:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:04.223 22:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.223 22:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:04.223 22:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.223 22:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.223 22:04:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.223 22:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:04.223 22:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:04.480 00:21:04.480 22:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.480 22:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.480 22:04:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.738 22:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.738 22:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.738 22:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.738 22:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.738 22:04:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.738 22:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.738 { 00:21:04.738 "cntlid": 55, 00:21:04.738 "qid": 0, 00:21:04.738 "state": "enabled", 00:21:04.738 "thread": "nvmf_tgt_poll_group_000", 00:21:04.738 "listen_address": { 00:21:04.738 "trtype": "TCP", 00:21:04.738 "adrfam": "IPv4", 00:21:04.738 "traddr": "10.0.0.2", 00:21:04.738 "trsvcid": "4420" 00:21:04.738 }, 00:21:04.738 "peer_address": { 00:21:04.738 "trtype": "TCP", 00:21:04.738 "adrfam": "IPv4", 00:21:04.738 "traddr": "10.0.0.1", 00:21:04.738 "trsvcid": "43824" 00:21:04.738 }, 00:21:04.738 "auth": { 00:21:04.738 "state": "completed", 00:21:04.738 "digest": "sha384", 00:21:04.738 "dhgroup": "null" 00:21:04.738 } 00:21:04.738 } 00:21:04.738 ]' 00:21:04.738 22:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.738 22:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.738 22:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:04.738 22:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:04.738 22:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.996 22:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.996 22:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.996 22:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.253 22:04:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGIwMDcyMDlmODVmZWJkYzUwNDI4MzYzMDNlYTRkMmI4NDM1YzQ3ZTllYzcxMGRmZjk5NzE4NTlmOGY2ZGQ1Odfg3ic=: 00:21:06.185 22:04:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.185 22:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.185 22:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.185 22:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.185 22:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.185 22:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.185 22:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.185 22:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:06.185 22:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:06.443 22:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:21:06.443 22:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.443 22:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:06.443 22:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:06.443 22:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:06.443 22:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.443 22:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.443 22:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.443 22:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.443 22:04:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.443 22:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.443 22:04:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.701 00:21:06.701 22:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.701 22:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.701 22:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.959 22:04:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.959 22:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.959 22:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.959 22:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.959 22:04:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.959 22:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.959 { 00:21:06.959 "cntlid": 57, 00:21:06.959 "qid": 0, 00:21:06.959 "state": "enabled", 00:21:06.959 "thread": "nvmf_tgt_poll_group_000", 00:21:06.959 "listen_address": { 00:21:06.959 "trtype": "TCP", 00:21:06.959 "adrfam": "IPv4", 00:21:06.959 "traddr": "10.0.0.2", 00:21:06.959 "trsvcid": "4420" 00:21:06.959 }, 00:21:06.959 "peer_address": { 00:21:06.959 "trtype": "TCP", 00:21:06.959 "adrfam": "IPv4", 00:21:06.959 "traddr": "10.0.0.1", 00:21:06.959 "trsvcid": "43838" 00:21:06.959 }, 00:21:06.959 "auth": { 00:21:06.959 "state": "completed", 00:21:06.959 "digest": "sha384", 00:21:06.959 "dhgroup": "ffdhe2048" 00:21:06.959 } 00:21:06.959 } 00:21:06.959 ]' 00:21:06.959 22:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.217 22:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.217 22:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.217 22:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:07.217 22:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.217 22:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.217 22:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.217 22:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.475 22:04:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkyNzNiYTJlYmYzZmFmMTViZjM1OTRkMTU0MDg1MmEyZDQ0OGMzNDI5MjZiYmJlkzD1iw==: --dhchap-ctrl-secret DHHC-1:03:NjE4OWVkYmJkMGM0MzJhYmE2MTgyZDY0NDIxYWUwZDI3NGNlNWQ0NzMyZWU3YTg5OTNkY2E3NTc5NjRlY2FhN8pEGEs=: 00:21:08.409 22:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.409 22:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.409 22:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.409 22:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.409 22:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.409 22:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.409 22:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:08.409 22:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:08.667 22:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:21:08.667 22:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.667 22:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:08.667 22:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:08.667 22:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:08.667 22:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.667 22:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.667 22:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.667 22:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.667 22:04:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.667 22:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.667 22:04:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.925 00:21:08.925 22:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.925 22:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.925 22:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.183 22:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.183 22:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.183 22:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.183 22:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.183 22:04:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.183 22:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.183 { 00:21:09.183 "cntlid": 59, 00:21:09.183 "qid": 0, 00:21:09.183 "state": "enabled", 00:21:09.183 "thread": "nvmf_tgt_poll_group_000", 00:21:09.183 "listen_address": { 00:21:09.183 "trtype": "TCP", 00:21:09.183 "adrfam": "IPv4", 00:21:09.183 "traddr": "10.0.0.2", 00:21:09.183 "trsvcid": "4420" 00:21:09.183 }, 00:21:09.183 "peer_address": { 00:21:09.183 "trtype": "TCP", 00:21:09.183 "adrfam": "IPv4", 00:21:09.183 
"traddr": "10.0.0.1", 00:21:09.183 "trsvcid": "46114" 00:21:09.183 }, 00:21:09.183 "auth": { 00:21:09.183 "state": "completed", 00:21:09.183 "digest": "sha384", 00:21:09.183 "dhgroup": "ffdhe2048" 00:21:09.183 } 00:21:09.183 } 00:21:09.183 ]' 00:21:09.183 22:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.183 22:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.183 22:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.183 22:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:09.183 22:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.441 22:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.441 22:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.441 22:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.699 22:04:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTVkYmQ3MzgyZDVkNWRhOWExYjBmNGVlYmUwZDg0MGJHHEcY: --dhchap-ctrl-secret DHHC-1:02:YmU3MzUxYmYxOWVmNTI1ZjIwYjFiMTk3Yjk2NzAzZmQwM2RmMjVkZjFlZWYwYWQ0E3C4iA==: 00:21:10.633 22:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.633 22:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.633 22:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.633 22:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.633 22:04:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.633 22:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:10.633 22:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:10.633 22:04:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:10.891 22:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:21:10.891 22:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:10.891 22:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:10.891 22:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:10.891 22:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:10.891 22:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.891 22:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.891 22:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.891 22:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.891 22:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.891 22:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.891 22:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:11.149 00:21:11.149 22:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.149 22:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.149 22:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.407 22:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.407 22:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.407 22:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.407 22:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.407 22:04:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.407 22:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.407 { 00:21:11.407 "cntlid": 61, 00:21:11.407 "qid": 0, 00:21:11.407 "state": "enabled", 00:21:11.407 "thread": "nvmf_tgt_poll_group_000", 00:21:11.408 "listen_address": { 00:21:11.408 "trtype": "TCP", 00:21:11.408 "adrfam": "IPv4", 00:21:11.408 "traddr": "10.0.0.2", 00:21:11.408 "trsvcid": "4420" 00:21:11.408 }, 00:21:11.408 "peer_address": { 00:21:11.408 "trtype": "TCP", 00:21:11.408 "adrfam": "IPv4", 00:21:11.408 "traddr": "10.0.0.1", 00:21:11.408 "trsvcid": "46142" 00:21:11.408 }, 00:21:11.408 "auth": { 00:21:11.408 "state": "completed", 00:21:11.408 "digest": "sha384", 00:21:11.408 "dhgroup": "ffdhe2048" 00:21:11.408 } 00:21:11.408 } 00:21:11.408 ]' 00:21:11.408 22:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.408 22:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.408 22:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.408 22:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:11.408 22:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.666 22:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.666 22:04:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.666 22:04:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.923 22:04:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2E1NGIzMTcxYTNjZGNhNGZjYmJkYzk1YzExYTE5OWQ3Y2NkNDBmNzI5ZmE4M2JiS38vHQ==: --dhchap-ctrl-secret DHHC-1:01:NjljODkwZGZkMzg1YTVlOTJkNTg0OGRiY2VmZGJkOGVsy0VT: 00:21:12.857 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.857 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.857 22:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.857 22:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.857 22:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.857 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:12.857 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:12.857 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:13.114 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:21:13.114 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.114 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:13.114 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:13.114 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:13.114 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.114 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:13.114 22:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.114 22:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.114 22:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.114 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:13.114 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:13.371 00:21:13.371 22:04:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.371 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.371 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.630 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.630 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.630 22:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.630 22:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.630 22:04:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.630 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.630 { 00:21:13.630 "cntlid": 63, 00:21:13.630 "qid": 0, 00:21:13.630 "state": "enabled", 00:21:13.630 "thread": "nvmf_tgt_poll_group_000", 00:21:13.630 "listen_address": { 00:21:13.630 "trtype": "TCP", 00:21:13.630 "adrfam": "IPv4", 00:21:13.630 "traddr": "10.0.0.2", 00:21:13.630 "trsvcid": "4420" 00:21:13.630 }, 00:21:13.630 "peer_address": { 00:21:13.630 "trtype": "TCP", 00:21:13.630 "adrfam": "IPv4", 00:21:13.630 "traddr": "10.0.0.1", 00:21:13.630 "trsvcid": "46174" 00:21:13.630 }, 00:21:13.630 "auth": { 00:21:13.630 "state": "completed", 00:21:13.630 "digest": "sha384", 00:21:13.630 "dhgroup": "ffdhe2048" 00:21:13.630 } 00:21:13.630 } 00:21:13.630 ]' 00:21:13.630 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.630 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.630 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.630 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:13.630 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:13.630 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.630 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.630 22:04:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.887 22:04:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGIwMDcyMDlmODVmZWJkYzUwNDI4MzYzMDNlYTRkMmI4NDM1YzQ3ZTllYzcxMGRmZjk5NzE4NTlmOGY2ZGQ1Odfg3ic=: 00:21:15.276 22:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.277 22:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.277 22:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.277 22:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
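The iterations above and below all run the same connect_authenticate cycle from target/auth.sh, once per digest/dhgroup/key combination: the host is pinned to a single DH-HMAC-CHAP digest and DH group, the host NQN is allowed on the subsystem with the key under test, and authentication is exercised over both the SPDK initiator (bdev_nvme_attach_controller) and the kernel initiator (nvme connect). A condensed sketch of one pass, using the NQNs, addresses, and flags from the trace — the target-side rpc_cmd socket is assumed to be the default, and the DHHC-1 secrets are abbreviated here:

# One connect_authenticate pass; digest, dhgroup, and key number vary per iteration.
hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }  # host-side SPDK app, as traced
rpc_cmd() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }  # target-side app on the default RPC socket (assumed)
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
subnqn=nqn.2024-03.io.spdk:cnode0

# Pin the host to exactly one digest and DH group so the negotiated values are deterministic.
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Allow the host on the subsystem with the key under test (plus a ctrlr key for bidirectional auth, when the key has one; key3 does not).
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# SPDK initiator: attach, then confirm the qpair completed authentication with the expected parameters.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect "completed"
hostrpc bdev_nvme_detach_controller nvme0

# Kernel initiator: same keys, passed as DHHC-1 formatted secrets (values abbreviated here).
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
nvme disconnect -n "$subnqn"
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Pinning --dhchap-digests/--dhchap-dhgroups before each attach is what makes the per-qpair assertions meaningful: nvmf_subsystem_get_qpairs can then be checked against exactly one digest (sha384 throughout this stretch) and one dhgroup per pass.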
00:21:15.277 22:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.277 22:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.277 22:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.277 22:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:15.277 22:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:15.277 22:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:21:15.277 22:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.277 22:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:15.277 22:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:15.277 22:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:15.277 22:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.277 22:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.277 22:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.277 22:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.277 22:04:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.277 22:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.277 22:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.534 00:21:15.534 22:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.534 22:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.534 22:04:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.792 22:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.792 22:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.792 22:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.792 22:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.792 22:04:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.792 22:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.792 { 
00:21:15.792 "cntlid": 65, 00:21:15.792 "qid": 0, 00:21:15.792 "state": "enabled", 00:21:15.792 "thread": "nvmf_tgt_poll_group_000", 00:21:15.792 "listen_address": { 00:21:15.792 "trtype": "TCP", 00:21:15.792 "adrfam": "IPv4", 00:21:15.792 "traddr": "10.0.0.2", 00:21:15.792 "trsvcid": "4420" 00:21:15.792 }, 00:21:15.792 "peer_address": { 00:21:15.792 "trtype": "TCP", 00:21:15.792 "adrfam": "IPv4", 00:21:15.792 "traddr": "10.0.0.1", 00:21:15.792 "trsvcid": "46202" 00:21:15.792 }, 00:21:15.792 "auth": { 00:21:15.792 "state": "completed", 00:21:15.792 "digest": "sha384", 00:21:15.792 "dhgroup": "ffdhe3072" 00:21:15.792 } 00:21:15.792 } 00:21:15.792 ]' 00:21:15.792 22:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.793 22:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.793 22:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.050 22:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:16.050 22:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.050 22:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.050 22:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.050 22:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.308 22:04:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkyNzNiYTJlYmYzZmFmMTViZjM1OTRkMTU0MDg1MmEyZDQ0OGMzNDI5MjZiYmJlkzD1iw==: --dhchap-ctrl-secret DHHC-1:03:NjE4OWVkYmJkMGM0MzJhYmE2MTgyZDY0NDIxYWUwZDI3NGNlNWQ0NzMyZWU3YTg5OTNkY2E3NTc5NjRlY2FhN8pEGEs=: 00:21:17.240 22:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.240 22:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.240 22:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.240 22:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.240 22:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.240 22:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.240 22:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:17.240 22:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:17.498 22:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:21:17.498 22:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.498 22:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:21:17.498 22:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:17.498 22:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:17.498 22:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.498 22:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.498 22:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.498 22:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.498 22:04:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.498 22:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.498 22:04:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.755 00:21:17.755 22:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.755 22:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:17.755 22:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.013 22:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.013 22:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.013 22:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.013 22:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.013 22:04:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.013 22:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.013 { 00:21:18.013 "cntlid": 67, 00:21:18.013 "qid": 0, 00:21:18.013 "state": "enabled", 00:21:18.013 "thread": "nvmf_tgt_poll_group_000", 00:21:18.013 "listen_address": { 00:21:18.013 "trtype": "TCP", 00:21:18.013 "adrfam": "IPv4", 00:21:18.013 "traddr": "10.0.0.2", 00:21:18.013 "trsvcid": "4420" 00:21:18.013 }, 00:21:18.013 "peer_address": { 00:21:18.013 "trtype": "TCP", 00:21:18.013 "adrfam": "IPv4", 00:21:18.013 "traddr": "10.0.0.1", 00:21:18.013 "trsvcid": "46224" 00:21:18.013 }, 00:21:18.013 "auth": { 00:21:18.013 "state": "completed", 00:21:18.013 "digest": "sha384", 00:21:18.013 "dhgroup": "ffdhe3072" 00:21:18.013 } 00:21:18.013 } 00:21:18.013 ]' 00:21:18.013 22:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.013 22:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:18.013 22:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.271 22:04:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:18.271 22:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.271 22:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.271 22:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.271 22:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.528 22:04:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTVkYmQ3MzgyZDVkNWRhOWExYjBmNGVlYmUwZDg0MGJHHEcY: --dhchap-ctrl-secret DHHC-1:02:YmU3MzUxYmYxOWVmNTI1ZjIwYjFiMTk3Yjk2NzAzZmQwM2RmMjVkZjFlZWYwYWQ0E3C4iA==: 00:21:19.462 22:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.462 22:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.462 22:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.462 22:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.462 22:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.462 22:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.462 22:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:19.462 22:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:19.720 22:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:21:19.720 22:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.720 22:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:19.720 22:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:19.720 22:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:19.721 22:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.721 22:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.721 22:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.721 22:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.721 22:04:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.721 22:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.721 22:04:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.978 00:21:19.978 22:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.978 22:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:19.978 22:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.236 22:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.236 22:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.236 22:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.236 22:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.236 22:04:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.236 22:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:20.236 { 00:21:20.236 "cntlid": 69, 00:21:20.236 "qid": 0, 00:21:20.236 "state": "enabled", 00:21:20.236 "thread": "nvmf_tgt_poll_group_000", 00:21:20.236 "listen_address": { 00:21:20.236 "trtype": "TCP", 00:21:20.236 "adrfam": "IPv4", 00:21:20.236 "traddr": "10.0.0.2", 00:21:20.236 "trsvcid": "4420" 00:21:20.236 }, 00:21:20.236 "peer_address": { 00:21:20.236 "trtype": "TCP", 00:21:20.236 "adrfam": "IPv4", 00:21:20.236 "traddr": "10.0.0.1", 00:21:20.236 "trsvcid": "60968" 00:21:20.236 }, 00:21:20.236 "auth": { 00:21:20.236 "state": "completed", 00:21:20.236 "digest": "sha384", 00:21:20.236 "dhgroup": "ffdhe3072" 00:21:20.236 } 00:21:20.236 } 00:21:20.236 ]' 00:21:20.236 22:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.494 22:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:20.494 22:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.494 22:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:20.494 22:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.494 22:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.494 22:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.494 22:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.753 22:04:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2E1NGIzMTcxYTNjZGNhNGZjYmJkYzk1YzExYTE5OWQ3Y2NkNDBmNzI5ZmE4M2JiS38vHQ==: --dhchap-ctrl-secret 
DHHC-1:01:NjljODkwZGZkMzg1YTVlOTJkNTg0OGRiY2VmZGJkOGVsy0VT: 00:21:21.687 22:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.687 22:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.687 22:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.687 22:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.687 22:04:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.687 22:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:21.687 22:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:21.687 22:04:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:21.945 22:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:21:21.945 22:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:21.945 22:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:21.945 22:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:21.945 22:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:21.945 22:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.945 22:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:21.945 22:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.945 22:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.945 22:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.945 22:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:21.945 22:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:22.204 00:21:22.204 22:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:22.204 22:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.204 22:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.462 22:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.462 22:04:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.462 22:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.462 22:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.462 22:04:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.462 22:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.462 { 00:21:22.462 "cntlid": 71, 00:21:22.462 "qid": 0, 00:21:22.462 "state": "enabled", 00:21:22.462 "thread": "nvmf_tgt_poll_group_000", 00:21:22.462 "listen_address": { 00:21:22.462 "trtype": "TCP", 00:21:22.462 "adrfam": "IPv4", 00:21:22.462 "traddr": "10.0.0.2", 00:21:22.462 "trsvcid": "4420" 00:21:22.462 }, 00:21:22.462 "peer_address": { 00:21:22.462 "trtype": "TCP", 00:21:22.462 "adrfam": "IPv4", 00:21:22.462 "traddr": "10.0.0.1", 00:21:22.462 "trsvcid": "32778" 00:21:22.462 }, 00:21:22.462 "auth": { 00:21:22.462 "state": "completed", 00:21:22.462 "digest": "sha384", 00:21:22.462 "dhgroup": "ffdhe3072" 00:21:22.462 } 00:21:22.462 } 00:21:22.462 ]' 00:21:22.462 22:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.462 22:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:22.462 22:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.462 22:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:22.462 22:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.720 22:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.720 22:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.720 22:04:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.978 22:04:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGIwMDcyMDlmODVmZWJkYzUwNDI4MzYzMDNlYTRkMmI4NDM1YzQ3ZTllYzcxMGRmZjk5NzE4NTlmOGY2ZGQ1Odfg3ic=: 00:21:23.911 22:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.911 22:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.911 22:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.911 22:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.911 22:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.911 22:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.911 22:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.911 22:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:23.911 22:04:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:24.169 22:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:21:24.169 22:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:24.169 22:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:24.169 22:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:24.169 22:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:24.169 22:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.169 22:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.169 22:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.169 22:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.170 22:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.170 22:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.170 22:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.427 00:21:24.427 22:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.427 22:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.427 22:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.685 22:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.685 22:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.685 22:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.685 22:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.685 22:04:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.685 22:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.685 { 00:21:24.685 "cntlid": 73, 00:21:24.685 "qid": 0, 00:21:24.685 "state": "enabled", 00:21:24.685 "thread": "nvmf_tgt_poll_group_000", 00:21:24.685 "listen_address": { 00:21:24.685 "trtype": "TCP", 00:21:24.685 "adrfam": "IPv4", 00:21:24.685 "traddr": "10.0.0.2", 00:21:24.685 "trsvcid": "4420" 00:21:24.685 }, 00:21:24.685 "peer_address": { 00:21:24.685 "trtype": "TCP", 00:21:24.685 "adrfam": "IPv4", 00:21:24.685 "traddr": "10.0.0.1", 00:21:24.685 "trsvcid": "32802" 00:21:24.685 }, 00:21:24.685 "auth": { 00:21:24.685 
"state": "completed", 00:21:24.685 "digest": "sha384", 00:21:24.685 "dhgroup": "ffdhe4096" 00:21:24.685 } 00:21:24.685 } 00:21:24.685 ]' 00:21:24.685 22:04:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.685 22:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:24.685 22:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.685 22:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:24.685 22:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:24.943 22:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.943 22:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.943 22:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.201 22:04:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkyNzNiYTJlYmYzZmFmMTViZjM1OTRkMTU0MDg1MmEyZDQ0OGMzNDI5MjZiYmJlkzD1iw==: --dhchap-ctrl-secret DHHC-1:03:NjE4OWVkYmJkMGM0MzJhYmE2MTgyZDY0NDIxYWUwZDI3NGNlNWQ0NzMyZWU3YTg5OTNkY2E3NTc5NjRlY2FhN8pEGEs=: 00:21:26.136 22:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.136 22:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.136 22:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.136 22:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.136 22:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.136 22:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:26.136 22:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:26.136 22:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:26.394 22:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:21:26.394 22:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:26.394 22:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:26.394 22:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:26.394 22:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:26.394 22:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.394 22:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.394 22:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.394 22:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.394 22:04:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.394 22:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.394 22:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.652 00:21:26.652 22:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.652 22:04:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.652 22:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.911 22:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.911 22:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.911 22:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.911 22:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.911 22:04:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.911 22:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:26.911 { 00:21:26.911 "cntlid": 75, 00:21:26.911 "qid": 0, 00:21:26.911 "state": "enabled", 00:21:26.911 "thread": "nvmf_tgt_poll_group_000", 00:21:26.911 "listen_address": { 00:21:26.911 "trtype": "TCP", 00:21:26.911 "adrfam": "IPv4", 00:21:26.911 "traddr": "10.0.0.2", 00:21:26.911 "trsvcid": "4420" 00:21:26.911 }, 00:21:26.911 "peer_address": { 00:21:26.911 "trtype": "TCP", 00:21:26.911 "adrfam": "IPv4", 00:21:26.911 "traddr": "10.0.0.1", 00:21:26.911 "trsvcid": "32828" 00:21:26.911 }, 00:21:26.911 "auth": { 00:21:26.911 "state": "completed", 00:21:26.911 "digest": "sha384", 00:21:26.911 "dhgroup": "ffdhe4096" 00:21:26.911 } 00:21:26.911 } 00:21:26.911 ]' 00:21:26.911 22:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.911 22:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:26.911 22:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.169 22:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:27.169 22:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.169 22:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.169 22:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.169 22:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.427 22:04:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTVkYmQ3MzgyZDVkNWRhOWExYjBmNGVlYmUwZDg0MGJHHEcY: --dhchap-ctrl-secret DHHC-1:02:YmU3MzUxYmYxOWVmNTI1ZjIwYjFiMTk3Yjk2NzAzZmQwM2RmMjVkZjFlZWYwYWQ0E3C4iA==: 00:21:28.380 22:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.380 22:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.380 22:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.380 22:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.380 22:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.380 22:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.380 22:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:28.380 22:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:28.654 22:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:21:28.654 22:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.654 22:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:28.654 22:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:28.654 22:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:28.654 22:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.654 22:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.654 22:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.654 22:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.654 22:04:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.654 22:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.654 22:04:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:29.220 00:21:29.220 22:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.220 22:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.220 22:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.220 22:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.220 22:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.220 22:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.220 22:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.220 22:04:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.220 22:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.220 { 00:21:29.220 "cntlid": 77, 00:21:29.220 "qid": 0, 00:21:29.220 "state": "enabled", 00:21:29.220 "thread": "nvmf_tgt_poll_group_000", 00:21:29.220 "listen_address": { 00:21:29.220 "trtype": "TCP", 00:21:29.220 "adrfam": "IPv4", 00:21:29.220 "traddr": "10.0.0.2", 00:21:29.220 "trsvcid": "4420" 00:21:29.220 }, 00:21:29.220 "peer_address": { 00:21:29.220 "trtype": "TCP", 00:21:29.220 "adrfam": "IPv4", 00:21:29.220 "traddr": "10.0.0.1", 00:21:29.220 "trsvcid": "43142" 00:21:29.220 }, 00:21:29.220 "auth": { 00:21:29.220 "state": "completed", 00:21:29.220 "digest": "sha384", 00:21:29.220 "dhgroup": "ffdhe4096" 00:21:29.220 } 00:21:29.220 } 00:21:29.221 ]' 00:21:29.221 22:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.478 22:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:29.478 22:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.478 22:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:29.478 22:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.478 22:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.478 22:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.478 22:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.736 22:04:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2E1NGIzMTcxYTNjZGNhNGZjYmJkYzk1YzExYTE5OWQ3Y2NkNDBmNzI5ZmE4M2JiS38vHQ==: --dhchap-ctrl-secret DHHC-1:01:NjljODkwZGZkMzg1YTVlOTJkNTg0OGRiY2VmZGJkOGVsy0VT: 00:21:30.669 22:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.669 22:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.669 22:04:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.669 22:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.669 22:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.669 22:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.669 22:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:30.669 22:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:30.927 22:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:21:30.927 22:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.927 22:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:30.927 22:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:30.927 22:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:30.927 22:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.927 22:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:30.927 22:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.927 22:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.927 22:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.927 22:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.927 22:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:31.493 00:21:31.493 22:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.493 22:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.493 22:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.750 22:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.750 22:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.750 22:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.750 22:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.750 22:04:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.750 22:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.750 { 00:21:31.750 "cntlid": 79, 00:21:31.750 "qid": 
0, 00:21:31.750 "state": "enabled", 00:21:31.750 "thread": "nvmf_tgt_poll_group_000", 00:21:31.750 "listen_address": { 00:21:31.750 "trtype": "TCP", 00:21:31.750 "adrfam": "IPv4", 00:21:31.750 "traddr": "10.0.0.2", 00:21:31.750 "trsvcid": "4420" 00:21:31.750 }, 00:21:31.750 "peer_address": { 00:21:31.750 "trtype": "TCP", 00:21:31.750 "adrfam": "IPv4", 00:21:31.750 "traddr": "10.0.0.1", 00:21:31.750 "trsvcid": "43158" 00:21:31.750 }, 00:21:31.750 "auth": { 00:21:31.750 "state": "completed", 00:21:31.750 "digest": "sha384", 00:21:31.750 "dhgroup": "ffdhe4096" 00:21:31.750 } 00:21:31.750 } 00:21:31.750 ]' 00:21:31.750 22:04:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.750 22:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:31.750 22:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.750 22:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:31.750 22:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.750 22:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.750 22:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.750 22:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.007 22:04:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGIwMDcyMDlmODVmZWJkYzUwNDI4MzYzMDNlYTRkMmI4NDM1YzQ3ZTllYzcxMGRmZjk5NzE4NTlmOGY2ZGQ1Odfg3ic=: 00:21:32.947 22:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.205 22:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.205 22:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.205 22:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.205 22:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.205 22:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:33.205 22:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.205 22:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:33.205 22:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:33.464 22:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:21:33.464 22:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.464 22:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:33.464 22:04:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:33.464 22:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:33.464 22:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.464 22:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.464 22:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.464 22:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.464 22:04:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.464 22:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.464 22:04:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.033 00:21:34.033 22:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.033 22:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.033 22:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.293 22:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.293 22:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.293 22:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.293 22:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.293 22:04:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.293 22:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.293 { 00:21:34.293 "cntlid": 81, 00:21:34.293 "qid": 0, 00:21:34.293 "state": "enabled", 00:21:34.293 "thread": "nvmf_tgt_poll_group_000", 00:21:34.293 "listen_address": { 00:21:34.293 "trtype": "TCP", 00:21:34.293 "adrfam": "IPv4", 00:21:34.293 "traddr": "10.0.0.2", 00:21:34.293 "trsvcid": "4420" 00:21:34.293 }, 00:21:34.293 "peer_address": { 00:21:34.293 "trtype": "TCP", 00:21:34.293 "adrfam": "IPv4", 00:21:34.293 "traddr": "10.0.0.1", 00:21:34.293 "trsvcid": "43184" 00:21:34.293 }, 00:21:34.293 "auth": { 00:21:34.293 "state": "completed", 00:21:34.293 "digest": "sha384", 00:21:34.293 "dhgroup": "ffdhe6144" 00:21:34.293 } 00:21:34.293 } 00:21:34.293 ]' 00:21:34.293 22:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.293 22:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:34.293 22:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.293 22:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:34.293 22:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:34.293 22:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.293 22:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.293 22:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.551 22:04:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkyNzNiYTJlYmYzZmFmMTViZjM1OTRkMTU0MDg1MmEyZDQ0OGMzNDI5MjZiYmJlkzD1iw==: --dhchap-ctrl-secret DHHC-1:03:NjE4OWVkYmJkMGM0MzJhYmE2MTgyZDY0NDIxYWUwZDI3NGNlNWQ0NzMyZWU3YTg5OTNkY2E3NTc5NjRlY2FhN8pEGEs=: 00:21:35.482 22:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.739 22:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.739 22:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.739 22:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.739 22:04:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.739 22:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.739 22:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:35.739 22:04:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:35.996 22:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:21:35.996 22:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.996 22:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:35.996 22:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:35.996 22:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:35.996 22:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.996 22:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.996 22:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.996 22:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.996 22:04:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.996 22:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.996 22:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.562 00:21:36.562 22:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:36.562 22:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:36.562 22:04:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.820 22:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.820 22:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.820 22:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.820 22:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.820 22:04:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.820 22:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:36.820 { 00:21:36.820 "cntlid": 83, 00:21:36.820 "qid": 0, 00:21:36.820 "state": "enabled", 00:21:36.820 "thread": "nvmf_tgt_poll_group_000", 00:21:36.820 "listen_address": { 00:21:36.820 "trtype": "TCP", 00:21:36.820 "adrfam": "IPv4", 00:21:36.820 "traddr": "10.0.0.2", 00:21:36.820 "trsvcid": "4420" 00:21:36.820 }, 00:21:36.820 "peer_address": { 00:21:36.820 "trtype": "TCP", 00:21:36.820 "adrfam": "IPv4", 00:21:36.820 "traddr": "10.0.0.1", 00:21:36.820 "trsvcid": "43204" 00:21:36.820 }, 00:21:36.820 "auth": { 00:21:36.820 "state": "completed", 00:21:36.820 "digest": "sha384", 00:21:36.820 "dhgroup": "ffdhe6144" 00:21:36.820 } 00:21:36.820 } 00:21:36.820 ]' 00:21:36.820 22:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:36.820 22:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:36.820 22:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:36.820 22:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:36.820 22:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.820 22:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.820 22:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.820 22:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.078 22:04:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTVkYmQ3MzgyZDVkNWRhOWExYjBmNGVlYmUwZDg0MGJHHEcY: --dhchap-ctrl-secret 
DHHC-1:02:YmU3MzUxYmYxOWVmNTI1ZjIwYjFiMTk3Yjk2NzAzZmQwM2RmMjVkZjFlZWYwYWQ0E3C4iA==: 00:21:38.450 22:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.450 22:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.450 22:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.450 22:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.450 22:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.450 22:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:38.450 22:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:38.450 22:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:38.450 22:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:21:38.450 22:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:38.450 22:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:38.450 22:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:38.450 22:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:38.450 22:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.450 22:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.450 22:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.450 22:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.450 22:04:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.450 22:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.450 22:04:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.016 00:21:39.016 22:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:39.016 22:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:39.016 22:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.273 22:04:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.273 22:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.273 22:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.273 22:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.273 22:04:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.273 22:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:39.273 { 00:21:39.273 "cntlid": 85, 00:21:39.273 "qid": 0, 00:21:39.273 "state": "enabled", 00:21:39.273 "thread": "nvmf_tgt_poll_group_000", 00:21:39.273 "listen_address": { 00:21:39.273 "trtype": "TCP", 00:21:39.273 "adrfam": "IPv4", 00:21:39.273 "traddr": "10.0.0.2", 00:21:39.273 "trsvcid": "4420" 00:21:39.273 }, 00:21:39.273 "peer_address": { 00:21:39.273 "trtype": "TCP", 00:21:39.273 "adrfam": "IPv4", 00:21:39.273 "traddr": "10.0.0.1", 00:21:39.273 "trsvcid": "39596" 00:21:39.273 }, 00:21:39.273 "auth": { 00:21:39.273 "state": "completed", 00:21:39.273 "digest": "sha384", 00:21:39.273 "dhgroup": "ffdhe6144" 00:21:39.273 } 00:21:39.273 } 00:21:39.273 ]' 00:21:39.273 22:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:39.273 22:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:39.273 22:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:39.273 22:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:39.273 22:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:39.531 22:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.531 22:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.531 22:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.789 22:04:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2E1NGIzMTcxYTNjZGNhNGZjYmJkYzk1YzExYTE5OWQ3Y2NkNDBmNzI5ZmE4M2JiS38vHQ==: --dhchap-ctrl-secret DHHC-1:01:NjljODkwZGZkMzg1YTVlOTJkNTg0OGRiY2VmZGJkOGVsy0VT: 00:21:40.721 22:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.721 22:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.721 22:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.721 22:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.721 22:04:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.721 22:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:40.721 22:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
00:21:40.721 22:04:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:40.979 22:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:40.980 22:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:40.980 22:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:40.980 22:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:40.980 22:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:40.980 22:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.980 22:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:40.980 22:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.980 22:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.980 22:05:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.980 22:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.980 22:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:41.545 00:21:41.545 22:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:41.545 22:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:41.545 22:05:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.803 22:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.803 22:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.803 22:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.803 22:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.803 22:05:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.803 22:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:41.803 { 00:21:41.803 "cntlid": 87, 00:21:41.803 "qid": 0, 00:21:41.803 "state": "enabled", 00:21:41.803 "thread": "nvmf_tgt_poll_group_000", 00:21:41.803 "listen_address": { 00:21:41.803 "trtype": "TCP", 00:21:41.803 "adrfam": "IPv4", 00:21:41.803 "traddr": "10.0.0.2", 00:21:41.803 "trsvcid": "4420" 00:21:41.803 }, 00:21:41.803 "peer_address": { 00:21:41.803 "trtype": "TCP", 00:21:41.803 "adrfam": "IPv4", 00:21:41.803 "traddr": "10.0.0.1", 00:21:41.803 "trsvcid": "39626" 00:21:41.803 }, 00:21:41.803 "auth": { 00:21:41.803 "state": "completed", 
00:21:41.803 "digest": "sha384", 00:21:41.803 "dhgroup": "ffdhe6144" 00:21:41.803 } 00:21:41.803 } 00:21:41.803 ]' 00:21:41.803 22:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:41.803 22:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:41.803 22:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:41.803 22:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:41.803 22:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:41.803 22:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.803 22:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.803 22:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.061 22:05:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGIwMDcyMDlmODVmZWJkYzUwNDI4MzYzMDNlYTRkMmI4NDM1YzQ3ZTllYzcxMGRmZjk5NzE4NTlmOGY2ZGQ1Odfg3ic=: 00:21:43.033 22:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.033 22:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.033 22:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.033 22:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.033 22:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.033 22:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:43.033 22:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:43.033 22:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:43.033 22:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:43.292 22:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:21:43.292 22:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:43.292 22:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:43.292 22:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:43.292 22:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:43.292 22:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.292 22:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:43.292 22:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.292 22:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.292 22:05:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.292 22:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.292 22:05:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.227 00:21:44.227 22:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:44.227 22:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:44.227 22:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.484 22:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.484 22:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.484 22:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.484 22:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.484 22:05:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.484 22:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:44.484 { 00:21:44.484 "cntlid": 89, 00:21:44.484 "qid": 0, 00:21:44.484 "state": "enabled", 00:21:44.484 "thread": "nvmf_tgt_poll_group_000", 00:21:44.484 "listen_address": { 00:21:44.484 "trtype": "TCP", 00:21:44.484 "adrfam": "IPv4", 00:21:44.484 "traddr": "10.0.0.2", 00:21:44.484 "trsvcid": "4420" 00:21:44.484 }, 00:21:44.484 "peer_address": { 00:21:44.484 "trtype": "TCP", 00:21:44.484 "adrfam": "IPv4", 00:21:44.484 "traddr": "10.0.0.1", 00:21:44.484 "trsvcid": "39652" 00:21:44.484 }, 00:21:44.484 "auth": { 00:21:44.484 "state": "completed", 00:21:44.484 "digest": "sha384", 00:21:44.484 "dhgroup": "ffdhe8192" 00:21:44.484 } 00:21:44.484 } 00:21:44.484 ]' 00:21:44.484 22:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:44.484 22:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:44.484 22:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:44.484 22:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:44.484 22:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:44.485 22:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.485 22:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.485 22:05:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.742 22:05:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkyNzNiYTJlYmYzZmFmMTViZjM1OTRkMTU0MDg1MmEyZDQ0OGMzNDI5MjZiYmJlkzD1iw==: --dhchap-ctrl-secret DHHC-1:03:NjE4OWVkYmJkMGM0MzJhYmE2MTgyZDY0NDIxYWUwZDI3NGNlNWQ0NzMyZWU3YTg5OTNkY2E3NTc5NjRlY2FhN8pEGEs=: 00:21:45.675 22:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.933 22:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.933 22:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.933 22:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.933 22:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.933 22:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:45.933 22:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:45.934 22:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:46.192 22:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:21:46.192 22:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:46.192 22:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:46.192 22:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:46.192 22:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:46.192 22:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.192 22:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.192 22:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.192 22:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.192 22:05:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.192 22:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.192 22:05:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:21:47.127 00:21:47.127 22:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:47.127 22:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:47.127 22:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.385 22:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.386 22:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.386 22:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.386 22:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.386 22:05:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.386 22:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:47.386 { 00:21:47.386 "cntlid": 91, 00:21:47.386 "qid": 0, 00:21:47.386 "state": "enabled", 00:21:47.386 "thread": "nvmf_tgt_poll_group_000", 00:21:47.386 "listen_address": { 00:21:47.386 "trtype": "TCP", 00:21:47.386 "adrfam": "IPv4", 00:21:47.386 "traddr": "10.0.0.2", 00:21:47.386 "trsvcid": "4420" 00:21:47.386 }, 00:21:47.386 "peer_address": { 00:21:47.386 "trtype": "TCP", 00:21:47.386 "adrfam": "IPv4", 00:21:47.386 "traddr": "10.0.0.1", 00:21:47.386 "trsvcid": "39684" 00:21:47.386 }, 00:21:47.386 "auth": { 00:21:47.386 "state": "completed", 00:21:47.386 "digest": "sha384", 00:21:47.386 "dhgroup": "ffdhe8192" 00:21:47.386 } 00:21:47.386 } 00:21:47.386 ]' 00:21:47.386 22:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:47.386 22:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:47.386 22:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:47.386 22:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:47.386 22:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:47.386 22:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.386 22:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.386 22:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.644 22:05:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTVkYmQ3MzgyZDVkNWRhOWExYjBmNGVlYmUwZDg0MGJHHEcY: --dhchap-ctrl-secret DHHC-1:02:YmU3MzUxYmYxOWVmNTI1ZjIwYjFiMTk3Yjk2NzAzZmQwM2RmMjVkZjFlZWYwYWQ0E3C4iA==: 00:21:48.578 22:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.578 22:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.578 22:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:48.578 22:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.578 22:05:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.578 22:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:48.578 22:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:48.578 22:05:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:49.146 22:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:49.146 22:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:49.146 22:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:49.146 22:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:49.146 22:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:49.146 22:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.146 22:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.146 22:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.146 22:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.146 22:05:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.146 22:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.146 22:05:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.714 00:21:49.715 22:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:49.715 22:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:49.715 22:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.973 22:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.973 22:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.973 22:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.973 22:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.973 22:05:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.973 22:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:49.973 { 
00:21:49.973 "cntlid": 93, 00:21:49.973 "qid": 0, 00:21:49.973 "state": "enabled", 00:21:49.973 "thread": "nvmf_tgt_poll_group_000", 00:21:49.973 "listen_address": { 00:21:49.973 "trtype": "TCP", 00:21:49.973 "adrfam": "IPv4", 00:21:49.973 "traddr": "10.0.0.2", 00:21:49.973 "trsvcid": "4420" 00:21:49.973 }, 00:21:49.973 "peer_address": { 00:21:49.973 "trtype": "TCP", 00:21:49.973 "adrfam": "IPv4", 00:21:49.973 "traddr": "10.0.0.1", 00:21:49.973 "trsvcid": "43736" 00:21:49.973 }, 00:21:49.973 "auth": { 00:21:49.973 "state": "completed", 00:21:49.973 "digest": "sha384", 00:21:49.973 "dhgroup": "ffdhe8192" 00:21:49.973 } 00:21:49.973 } 00:21:49.973 ]' 00:21:49.973 22:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.232 22:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:50.232 22:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.232 22:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:50.232 22:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.232 22:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.232 22:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.232 22:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.490 22:05:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2E1NGIzMTcxYTNjZGNhNGZjYmJkYzk1YzExYTE5OWQ3Y2NkNDBmNzI5ZmE4M2JiS38vHQ==: --dhchap-ctrl-secret DHHC-1:01:NjljODkwZGZkMzg1YTVlOTJkNTg0OGRiY2VmZGJkOGVsy0VT: 00:21:51.427 22:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.427 22:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.427 22:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.427 22:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.427 22:05:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.427 22:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:51.427 22:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:51.427 22:05:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:51.684 22:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:51.684 22:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:51.684 22:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:51.684 22:05:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:51.684 22:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:51.684 22:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.684 22:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:51.684 22:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.684 22:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.684 22:05:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.684 22:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:51.684 22:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.620 00:21:52.620 22:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:52.620 22:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:52.620 22:05:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.878 22:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.878 22:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.878 22:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.878 22:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.878 22:05:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.878 22:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:52.878 { 00:21:52.878 "cntlid": 95, 00:21:52.878 "qid": 0, 00:21:52.878 "state": "enabled", 00:21:52.878 "thread": "nvmf_tgt_poll_group_000", 00:21:52.878 "listen_address": { 00:21:52.878 "trtype": "TCP", 00:21:52.878 "adrfam": "IPv4", 00:21:52.878 "traddr": "10.0.0.2", 00:21:52.878 "trsvcid": "4420" 00:21:52.878 }, 00:21:52.878 "peer_address": { 00:21:52.878 "trtype": "TCP", 00:21:52.878 "adrfam": "IPv4", 00:21:52.878 "traddr": "10.0.0.1", 00:21:52.878 "trsvcid": "43762" 00:21:52.878 }, 00:21:52.878 "auth": { 00:21:52.878 "state": "completed", 00:21:52.878 "digest": "sha384", 00:21:52.878 "dhgroup": "ffdhe8192" 00:21:52.878 } 00:21:52.878 } 00:21:52.878 ]' 00:21:52.878 22:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:52.878 22:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:52.878 22:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:53.135 22:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:53.135 22:05:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:53.135 22:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.135 22:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.136 22:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.393 22:05:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGIwMDcyMDlmODVmZWJkYzUwNDI4MzYzMDNlYTRkMmI4NDM1YzQ3ZTllYzcxMGRmZjk5NzE4NTlmOGY2ZGQ1Odfg3ic=: 00:21:54.344 22:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.344 22:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.344 22:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.344 22:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.344 22:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.344 22:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:54.344 22:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:54.344 22:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:54.344 22:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:54.344 22:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:54.601 22:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:54.601 22:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:54.601 22:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:54.601 22:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:54.601 22:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:54.601 22:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.601 22:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.601 22:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.601 22:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.601 22:05:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.601 22:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.601 22:05:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.859 00:21:54.859 22:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:54.859 22:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:54.859 22:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.117 22:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.117 22:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.117 22:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.117 22:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.117 22:05:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.117 22:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.117 { 00:21:55.117 "cntlid": 97, 00:21:55.117 "qid": 0, 00:21:55.117 "state": "enabled", 00:21:55.117 "thread": "nvmf_tgt_poll_group_000", 00:21:55.117 "listen_address": { 00:21:55.117 "trtype": "TCP", 00:21:55.117 "adrfam": "IPv4", 00:21:55.117 "traddr": "10.0.0.2", 00:21:55.117 "trsvcid": "4420" 00:21:55.117 }, 00:21:55.117 "peer_address": { 00:21:55.117 "trtype": "TCP", 00:21:55.117 "adrfam": "IPv4", 00:21:55.117 "traddr": "10.0.0.1", 00:21:55.117 "trsvcid": "43792" 00:21:55.117 }, 00:21:55.117 "auth": { 00:21:55.117 "state": "completed", 00:21:55.117 "digest": "sha512", 00:21:55.117 "dhgroup": "null" 00:21:55.117 } 00:21:55.117 } 00:21:55.117 ]' 00:21:55.117 22:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.375 22:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.375 22:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.375 22:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:55.375 22:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.375 22:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.375 22:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.375 22:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.634 22:05:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkyNzNiYTJlYmYzZmFmMTViZjM1OTRkMTU0MDg1MmEyZDQ0OGMzNDI5MjZiYmJlkzD1iw==: --dhchap-ctrl-secret 
DHHC-1:03:NjE4OWVkYmJkMGM0MzJhYmE2MTgyZDY0NDIxYWUwZDI3NGNlNWQ0NzMyZWU3YTg5OTNkY2E3NTc5NjRlY2FhN8pEGEs=: 00:21:56.576 22:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.576 22:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.576 22:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.576 22:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.576 22:05:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.576 22:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:56.576 22:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:56.576 22:05:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:56.878 22:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:56.878 22:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:56.878 22:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:56.878 22:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:56.878 22:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:56.878 22:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.878 22:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.878 22:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.878 22:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.878 22:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.878 22:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.878 22:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.137 00:21:57.137 22:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:57.137 22:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:57.137 22:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.395 22:05:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.395 22:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.395 22:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.395 22:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.395 22:05:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.395 22:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:57.395 { 00:21:57.395 "cntlid": 99, 00:21:57.395 "qid": 0, 00:21:57.395 "state": "enabled", 00:21:57.395 "thread": "nvmf_tgt_poll_group_000", 00:21:57.395 "listen_address": { 00:21:57.395 "trtype": "TCP", 00:21:57.395 "adrfam": "IPv4", 00:21:57.395 "traddr": "10.0.0.2", 00:21:57.395 "trsvcid": "4420" 00:21:57.395 }, 00:21:57.395 "peer_address": { 00:21:57.395 "trtype": "TCP", 00:21:57.395 "adrfam": "IPv4", 00:21:57.395 "traddr": "10.0.0.1", 00:21:57.395 "trsvcid": "43820" 00:21:57.395 }, 00:21:57.395 "auth": { 00:21:57.395 "state": "completed", 00:21:57.395 "digest": "sha512", 00:21:57.395 "dhgroup": "null" 00:21:57.395 } 00:21:57.395 } 00:21:57.395 ]' 00:21:57.395 22:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:57.653 22:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.653 22:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:57.653 22:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:57.653 22:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:57.653 22:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.653 22:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.653 22:05:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.911 22:05:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTVkYmQ3MzgyZDVkNWRhOWExYjBmNGVlYmUwZDg0MGJHHEcY: --dhchap-ctrl-secret DHHC-1:02:YmU3MzUxYmYxOWVmNTI1ZjIwYjFiMTk3Yjk2NzAzZmQwM2RmMjVkZjFlZWYwYWQ0E3C4iA==: 00:21:58.848 22:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.848 22:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:58.848 22:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.848 22:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.848 22:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.848 22:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:58.848 22:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:58.848 22:05:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:59.107 22:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:59.107 22:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:59.107 22:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:59.107 22:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:59.107 22:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:59.107 22:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.107 22:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.107 22:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.107 22:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.107 22:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.107 22:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.107 22:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.365 00:21:59.365 22:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:59.365 22:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:59.365 22:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.623 22:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.623 22:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.623 22:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.623 22:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.623 22:05:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.623 22:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:59.623 { 00:21:59.623 "cntlid": 101, 00:21:59.623 "qid": 0, 00:21:59.623 "state": "enabled", 00:21:59.623 "thread": "nvmf_tgt_poll_group_000", 00:21:59.623 "listen_address": { 00:21:59.623 "trtype": "TCP", 00:21:59.623 "adrfam": "IPv4", 00:21:59.623 "traddr": "10.0.0.2", 00:21:59.623 "trsvcid": "4420" 00:21:59.623 }, 00:21:59.623 "peer_address": { 00:21:59.623 "trtype": "TCP", 00:21:59.623 "adrfam": "IPv4", 00:21:59.623 "traddr": "10.0.0.1", 00:21:59.623 "trsvcid": "44378" 00:21:59.623 }, 00:21:59.623 "auth": 
{ 00:21:59.623 "state": "completed", 00:21:59.623 "digest": "sha512", 00:21:59.623 "dhgroup": "null" 00:21:59.623 } 00:21:59.623 } 00:21:59.623 ]' 00:21:59.623 22:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:59.623 22:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.623 22:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.623 22:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:59.623 22:05:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.880 22:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.880 22:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.880 22:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.880 22:05:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2E1NGIzMTcxYTNjZGNhNGZjYmJkYzk1YzExYTE5OWQ3Y2NkNDBmNzI5ZmE4M2JiS38vHQ==: --dhchap-ctrl-secret DHHC-1:01:NjljODkwZGZkMzg1YTVlOTJkNTg0OGRiY2VmZGJkOGVsy0VT: 00:22:01.255 22:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.255 22:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.255 22:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.255 22:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.255 22:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.255 22:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:01.255 22:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:01.255 22:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:01.255 22:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:22:01.255 22:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:01.255 22:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:01.255 22:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:01.255 22:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:01.255 22:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.255 22:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:01.255 22:05:20 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.255 22:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.255 22:05:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.255 22:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:01.255 22:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:01.512 00:22:01.512 22:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:01.512 22:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:01.512 22:05:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.770 22:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.770 22:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.770 22:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.770 22:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.770 22:05:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.770 22:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:01.770 { 00:22:01.770 "cntlid": 103, 00:22:01.770 "qid": 0, 00:22:01.770 "state": "enabled", 00:22:01.770 "thread": "nvmf_tgt_poll_group_000", 00:22:01.770 "listen_address": { 00:22:01.770 "trtype": "TCP", 00:22:01.770 "adrfam": "IPv4", 00:22:01.770 "traddr": "10.0.0.2", 00:22:01.770 "trsvcid": "4420" 00:22:01.770 }, 00:22:01.770 "peer_address": { 00:22:01.770 "trtype": "TCP", 00:22:01.770 "adrfam": "IPv4", 00:22:01.770 "traddr": "10.0.0.1", 00:22:01.770 "trsvcid": "44392" 00:22:01.770 }, 00:22:01.770 "auth": { 00:22:01.770 "state": "completed", 00:22:01.770 "digest": "sha512", 00:22:01.770 "dhgroup": "null" 00:22:01.770 } 00:22:01.770 } 00:22:01.770 ]' 00:22:01.770 22:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:01.770 22:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:01.770 22:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:01.770 22:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:22:01.770 22:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:02.029 22:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.029 22:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.029 22:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.288 22:05:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGIwMDcyMDlmODVmZWJkYzUwNDI4MzYzMDNlYTRkMmI4NDM1YzQ3ZTllYzcxMGRmZjk5NzE4NTlmOGY2ZGQ1Odfg3ic=: 00:22:03.223 22:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.223 22:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.223 22:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.223 22:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.223 22:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.223 22:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:03.223 22:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:03.223 22:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:03.224 22:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:03.481 22:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:22:03.481 22:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:03.481 22:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:03.481 22:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:03.481 22:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:03.481 22:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.481 22:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.481 22:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.481 22:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.481 22:05:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.481 22:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.481 22:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.739 00:22:03.739 22:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:03.739 22:05:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.739 22:05:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:03.997 22:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.997 22:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.997 22:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.997 22:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.997 22:05:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.997 22:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:03.997 { 00:22:03.997 "cntlid": 105, 00:22:03.997 "qid": 0, 00:22:03.997 "state": "enabled", 00:22:03.997 "thread": "nvmf_tgt_poll_group_000", 00:22:03.997 "listen_address": { 00:22:03.997 "trtype": "TCP", 00:22:03.997 "adrfam": "IPv4", 00:22:03.997 "traddr": "10.0.0.2", 00:22:03.997 "trsvcid": "4420" 00:22:03.997 }, 00:22:03.997 "peer_address": { 00:22:03.997 "trtype": "TCP", 00:22:03.998 "adrfam": "IPv4", 00:22:03.998 "traddr": "10.0.0.1", 00:22:03.998 "trsvcid": "44420" 00:22:03.998 }, 00:22:03.998 "auth": { 00:22:03.998 "state": "completed", 00:22:03.998 "digest": "sha512", 00:22:03.998 "dhgroup": "ffdhe2048" 00:22:03.998 } 00:22:03.998 } 00:22:03.998 ]' 00:22:03.998 22:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:03.998 22:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.998 22:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:03.998 22:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:03.998 22:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:03.998 22:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.998 22:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.998 22:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.257 22:05:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkyNzNiYTJlYmYzZmFmMTViZjM1OTRkMTU0MDg1MmEyZDQ0OGMzNDI5MjZiYmJlkzD1iw==: --dhchap-ctrl-secret DHHC-1:03:NjE4OWVkYmJkMGM0MzJhYmE2MTgyZDY0NDIxYWUwZDI3NGNlNWQ0NzMyZWU3YTg5OTNkY2E3NTc5NjRlY2FhN8pEGEs=: 00:22:05.193 22:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.193 22:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.193 22:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.193 22:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
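The pass that just finished above (sha512 / ffdhe2048, key 0) is the same cycle this trace repeats for every digest, DH group, and key ID. Condensed into plain shell, and with the key material replaced by placeholders ($hostnqn, $hostid, $key, $ckey, $secret, $csecret are stand-ins here, not names from the autotest source), the cycle being exercised looks roughly like this:

  # Sketch of one DH-HMAC-CHAP pass, condensed from the trace above.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostrpc() { "$RPC" -s /var/tmp/host.sock "$@"; }   # host-side bdev RPCs

  # Host: pin the digest/dhgroup combination under test.
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  # Target: admit the host NQN with this key (controller key is optional).
  "$RPC" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"
  # Host: attaching a controller forces the authentication exchange.
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 \
      -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"
  # Target: the qpair listing reports what was negotiated.
  "$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | \
      jq -r '.[0].auth.state'                        # expect "completed"
  hostrpc bdev_nvme_detach_controller nvme0
  # Same key pair again through the kernel initiator, then clean up.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret "$secret" --dhchap-ctrl-secret "$csecret"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  "$RPC" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

Every flag above appears verbatim in the trace; only the variable names are invented for the sketch. The trace that follows resumes the loop with key 1.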
00:22:05.452 22:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.452 22:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:05.452 22:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:05.452 22:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:05.452 22:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:22:05.452 22:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:05.452 22:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:05.452 22:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:05.452 22:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:05.452 22:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.452 22:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.452 22:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.452 22:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.452 22:05:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.452 22:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.452 22:05:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.019 00:22:06.019 22:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:06.019 22:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:06.019 22:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.278 22:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.278 22:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.278 22:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.278 22:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.278 22:05:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.278 22:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:06.278 { 00:22:06.278 "cntlid": 107, 00:22:06.278 "qid": 0, 00:22:06.278 "state": "enabled", 00:22:06.278 "thread": 
"nvmf_tgt_poll_group_000", 00:22:06.278 "listen_address": { 00:22:06.278 "trtype": "TCP", 00:22:06.278 "adrfam": "IPv4", 00:22:06.278 "traddr": "10.0.0.2", 00:22:06.278 "trsvcid": "4420" 00:22:06.278 }, 00:22:06.278 "peer_address": { 00:22:06.278 "trtype": "TCP", 00:22:06.278 "adrfam": "IPv4", 00:22:06.278 "traddr": "10.0.0.1", 00:22:06.278 "trsvcid": "44454" 00:22:06.278 }, 00:22:06.278 "auth": { 00:22:06.278 "state": "completed", 00:22:06.278 "digest": "sha512", 00:22:06.278 "dhgroup": "ffdhe2048" 00:22:06.278 } 00:22:06.278 } 00:22:06.278 ]' 00:22:06.278 22:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:06.278 22:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.278 22:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:06.278 22:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:06.278 22:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:06.278 22:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.278 22:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.278 22:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.536 22:05:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTVkYmQ3MzgyZDVkNWRhOWExYjBmNGVlYmUwZDg0MGJHHEcY: --dhchap-ctrl-secret DHHC-1:02:YmU3MzUxYmYxOWVmNTI1ZjIwYjFiMTk3Yjk2NzAzZmQwM2RmMjVkZjFlZWYwYWQ0E3C4iA==: 00:22:07.472 22:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.472 22:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.473 22:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.473 22:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.473 22:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.473 22:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:07.473 22:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:07.473 22:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:07.730 22:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:22:07.730 22:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:07.730 22:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:07.730 22:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:07.730 22:05:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:07.730 22:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.730 22:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.730 22:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.730 22:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.730 22:05:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.730 22:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.730 22:05:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.988 00:22:07.988 22:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:07.988 22:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:07.988 22:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.246 22:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.246 22:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.246 22:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.246 22:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.246 22:05:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.246 22:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:08.246 { 00:22:08.246 "cntlid": 109, 00:22:08.246 "qid": 0, 00:22:08.246 "state": "enabled", 00:22:08.246 "thread": "nvmf_tgt_poll_group_000", 00:22:08.246 "listen_address": { 00:22:08.246 "trtype": "TCP", 00:22:08.246 "adrfam": "IPv4", 00:22:08.246 "traddr": "10.0.0.2", 00:22:08.246 "trsvcid": "4420" 00:22:08.246 }, 00:22:08.246 "peer_address": { 00:22:08.246 "trtype": "TCP", 00:22:08.246 "adrfam": "IPv4", 00:22:08.246 "traddr": "10.0.0.1", 00:22:08.246 "trsvcid": "44480" 00:22:08.246 }, 00:22:08.246 "auth": { 00:22:08.246 "state": "completed", 00:22:08.246 "digest": "sha512", 00:22:08.246 "dhgroup": "ffdhe2048" 00:22:08.246 } 00:22:08.246 } 00:22:08.246 ]' 00:22:08.246 22:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:08.503 22:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.503 22:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:08.503 22:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:08.503 22:05:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:08.503 22:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.503 22:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.503 22:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.776 22:05:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2E1NGIzMTcxYTNjZGNhNGZjYmJkYzk1YzExYTE5OWQ3Y2NkNDBmNzI5ZmE4M2JiS38vHQ==: --dhchap-ctrl-secret DHHC-1:01:NjljODkwZGZkMzg1YTVlOTJkNTg0OGRiY2VmZGJkOGVsy0VT: 00:22:09.709 22:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.709 22:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.709 22:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.709 22:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.709 22:05:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.709 22:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:09.709 22:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:09.709 22:05:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:09.966 22:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:22:09.966 22:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:09.966 22:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:09.966 22:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:22:09.966 22:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:09.966 22:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.966 22:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:09.966 22:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.966 22:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.966 22:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.966 22:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:09.966 22:05:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:10.224 00:22:10.224 22:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:10.224 22:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:10.224 22:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.518 22:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.518 22:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.518 22:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.518 22:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.518 22:05:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.518 22:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:10.518 { 00:22:10.518 "cntlid": 111, 00:22:10.518 "qid": 0, 00:22:10.518 "state": "enabled", 00:22:10.518 "thread": "nvmf_tgt_poll_group_000", 00:22:10.518 "listen_address": { 00:22:10.518 "trtype": "TCP", 00:22:10.518 "adrfam": "IPv4", 00:22:10.518 "traddr": "10.0.0.2", 00:22:10.518 "trsvcid": "4420" 00:22:10.518 }, 00:22:10.518 "peer_address": { 00:22:10.518 "trtype": "TCP", 00:22:10.518 "adrfam": "IPv4", 00:22:10.518 "traddr": "10.0.0.1", 00:22:10.518 "trsvcid": "44064" 00:22:10.518 }, 00:22:10.518 "auth": { 00:22:10.518 "state": "completed", 00:22:10.518 "digest": "sha512", 00:22:10.518 "dhgroup": "ffdhe2048" 00:22:10.518 } 00:22:10.518 } 00:22:10.518 ]' 00:22:10.518 22:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:10.518 22:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.518 22:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:10.518 22:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:10.518 22:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:10.518 22:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.518 22:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.518 22:05:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.794 22:05:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGIwMDcyMDlmODVmZWJkYzUwNDI4MzYzMDNlYTRkMmI4NDM1YzQ3ZTllYzcxMGRmZjk5NzE4NTlmOGY2ZGQ1Odfg3ic=: 00:22:12.177 22:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.178 22:05:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.178 22:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.178 22:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.178 22:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.178 22:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:12.178 22:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:12.178 22:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:12.178 22:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:12.178 22:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:22:12.178 22:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:12.178 22:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:12.178 22:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:12.178 22:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:12.178 22:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.178 22:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.178 22:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.178 22:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.178 22:05:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.178 22:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.178 22:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.436 00:22:12.436 22:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:12.436 22:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.436 22:05:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:12.693 22:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.693 22:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.693 22:05:32 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.693 22:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.693 22:05:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.693 22:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:12.693 { 00:22:12.693 "cntlid": 113, 00:22:12.693 "qid": 0, 00:22:12.693 "state": "enabled", 00:22:12.693 "thread": "nvmf_tgt_poll_group_000", 00:22:12.693 "listen_address": { 00:22:12.693 "trtype": "TCP", 00:22:12.693 "adrfam": "IPv4", 00:22:12.693 "traddr": "10.0.0.2", 00:22:12.693 "trsvcid": "4420" 00:22:12.693 }, 00:22:12.693 "peer_address": { 00:22:12.693 "trtype": "TCP", 00:22:12.693 "adrfam": "IPv4", 00:22:12.693 "traddr": "10.0.0.1", 00:22:12.694 "trsvcid": "44090" 00:22:12.694 }, 00:22:12.694 "auth": { 00:22:12.694 "state": "completed", 00:22:12.694 "digest": "sha512", 00:22:12.694 "dhgroup": "ffdhe3072" 00:22:12.694 } 00:22:12.694 } 00:22:12.694 ]' 00:22:12.694 22:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:12.951 22:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.951 22:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:12.951 22:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:12.951 22:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:12.951 22:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.951 22:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.951 22:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.210 22:05:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkyNzNiYTJlYmYzZmFmMTViZjM1OTRkMTU0MDg1MmEyZDQ0OGMzNDI5MjZiYmJlkzD1iw==: --dhchap-ctrl-secret DHHC-1:03:NjE4OWVkYmJkMGM0MzJhYmE2MTgyZDY0NDIxYWUwZDI3NGNlNWQ0NzMyZWU3YTg5OTNkY2E3NTc5NjRlY2FhN8pEGEs=: 00:22:14.144 22:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.144 22:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.144 22:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.144 22:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.144 22:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.144 22:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:14.144 22:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:14.144 22:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:14.401 22:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:22:14.401 22:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:14.401 22:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:14.401 22:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:14.401 22:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:14.401 22:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.401 22:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.401 22:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.401 22:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.401 22:05:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.401 22:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.401 22:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.658 00:22:14.658 22:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:14.658 22:05:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:14.658 22:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.915 22:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.915 22:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.915 22:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.915 22:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.915 22:05:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.915 22:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:14.915 { 00:22:14.915 "cntlid": 115, 00:22:14.915 "qid": 0, 00:22:14.915 "state": "enabled", 00:22:14.915 "thread": "nvmf_tgt_poll_group_000", 00:22:14.915 "listen_address": { 00:22:14.915 "trtype": "TCP", 00:22:14.915 "adrfam": "IPv4", 00:22:14.915 "traddr": "10.0.0.2", 00:22:14.915 "trsvcid": "4420" 00:22:14.915 }, 00:22:14.915 "peer_address": { 00:22:14.915 "trtype": "TCP", 00:22:14.915 "adrfam": "IPv4", 00:22:14.915 "traddr": "10.0.0.1", 00:22:14.915 "trsvcid": "44126" 00:22:14.915 }, 00:22:14.915 "auth": { 00:22:14.915 "state": "completed", 00:22:14.915 "digest": "sha512", 00:22:14.915 "dhgroup": "ffdhe3072" 00:22:14.915 } 00:22:14.915 } 
00:22:14.916 ]' 00:22:14.916 22:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:15.173 22:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:15.173 22:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:15.173 22:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:15.173 22:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:15.173 22:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.173 22:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.173 22:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.431 22:05:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTVkYmQ3MzgyZDVkNWRhOWExYjBmNGVlYmUwZDg0MGJHHEcY: --dhchap-ctrl-secret DHHC-1:02:YmU3MzUxYmYxOWVmNTI1ZjIwYjFiMTk3Yjk2NzAzZmQwM2RmMjVkZjFlZWYwYWQ0E3C4iA==: 00:22:16.365 22:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.365 22:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:16.365 22:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.365 22:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.365 22:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.366 22:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:16.366 22:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:16.366 22:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:16.624 22:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:22:16.624 22:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:16.624 22:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:16.624 22:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:16.624 22:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:16.624 22:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.624 22:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:16.624 22:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.624 22:05:35 
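Each verification pass in the trace above follows the same pattern: confirm the controller attached under the expected name, then pull the subsystem's queue pairs and assert the negotiated auth parameters. A minimal bash sketch of that step — hostrpc and rpc_cmd are the test helpers seen in this trace (hostrpc wraps scripts/rpc.py -s /var/tmp/host.sock); the herestring plumbing is an illustrative assumption, not the script verbatim:

  name=$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]                                           # attach succeeded
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]     # negotiated hash
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]  # negotiated DH group
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # DH-HMAC-CHAP finished
  hostrpc bdev_nvme_detach_controller nvme0                      # clean up for the next key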
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.624 22:05:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.624 22:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:16.624 22:05:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:16.882 00:22:16.882 22:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:16.882 22:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:16.882 22:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.140 22:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.140 22:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.140 22:05:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.140 22:05:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.140 22:05:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.140 22:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:17.140 { 00:22:17.140 "cntlid": 117, 00:22:17.140 "qid": 0, 00:22:17.140 "state": "enabled", 00:22:17.140 "thread": "nvmf_tgt_poll_group_000", 00:22:17.140 "listen_address": { 00:22:17.140 "trtype": "TCP", 00:22:17.140 "adrfam": "IPv4", 00:22:17.140 "traddr": "10.0.0.2", 00:22:17.140 "trsvcid": "4420" 00:22:17.140 }, 00:22:17.140 "peer_address": { 00:22:17.140 "trtype": "TCP", 00:22:17.140 "adrfam": "IPv4", 00:22:17.140 "traddr": "10.0.0.1", 00:22:17.140 "trsvcid": "44150" 00:22:17.140 }, 00:22:17.140 "auth": { 00:22:17.140 "state": "completed", 00:22:17.140 "digest": "sha512", 00:22:17.140 "dhgroup": "ffdhe3072" 00:22:17.140 } 00:22:17.140 } 00:22:17.140 ]' 00:22:17.140 22:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:17.398 22:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:17.398 22:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:17.398 22:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:17.398 22:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:17.398 22:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.398 22:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.398 22:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.656 22:05:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2E1NGIzMTcxYTNjZGNhNGZjYmJkYzk1YzExYTE5OWQ3Y2NkNDBmNzI5ZmE4M2JiS38vHQ==: --dhchap-ctrl-secret DHHC-1:01:NjljODkwZGZkMzg1YTVlOTJkNTg0OGRiY2VmZGJkOGVsy0VT: 00:22:18.594 22:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.594 22:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:18.594 22:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.594 22:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.594 22:05:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.594 22:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:18.594 22:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:18.594 22:05:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:18.852 22:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:22:18.852 22:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:18.852 22:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:18.852 22:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:22:18.852 22:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:18.852 22:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.852 22:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:18.852 22:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.852 22:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.852 22:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.852 22:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:18.852 22:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:19.110 00:22:19.110 22:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:19.110 22:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.110 22:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:19.369 22:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.369 22:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.369 22:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.369 22:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.369 22:05:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.369 22:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:19.369 { 00:22:19.369 "cntlid": 119, 00:22:19.369 "qid": 0, 00:22:19.369 "state": "enabled", 00:22:19.369 "thread": "nvmf_tgt_poll_group_000", 00:22:19.369 "listen_address": { 00:22:19.369 "trtype": "TCP", 00:22:19.369 "adrfam": "IPv4", 00:22:19.369 "traddr": "10.0.0.2", 00:22:19.369 "trsvcid": "4420" 00:22:19.369 }, 00:22:19.369 "peer_address": { 00:22:19.369 "trtype": "TCP", 00:22:19.369 "adrfam": "IPv4", 00:22:19.369 "traddr": "10.0.0.1", 00:22:19.369 "trsvcid": "34748" 00:22:19.369 }, 00:22:19.369 "auth": { 00:22:19.369 "state": "completed", 00:22:19.369 "digest": "sha512", 00:22:19.369 "dhgroup": "ffdhe3072" 00:22:19.369 } 00:22:19.369 } 00:22:19.369 ]' 00:22:19.369 22:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:19.369 22:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:19.369 22:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:19.369 22:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:19.627 22:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:19.627 22:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.627 22:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.627 22:05:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.887 22:05:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGIwMDcyMDlmODVmZWJkYzUwNDI4MzYzMDNlYTRkMmI4NDM1YzQ3ZTllYzcxMGRmZjk5NzE4NTlmOGY2ZGQ1Odfg3ic=: 00:22:20.823 22:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.823 22:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.823 22:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.823 22:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.823 22:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.823 22:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:20.823 22:05:40 
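At this point the run has finished the ffdhe3072 round and the outer loop advances to ffdhe4096. From the loop heads visible at target/auth.sh@92-96, the driver behaves roughly like the sketch below; the dhgroups contents are an assumption inferred from the groups exercised in this section, and the keys array is registered earlier in the script:

  dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)        # groups seen in this part of the run
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do                # key slots 0..3
      hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
      connect_authenticate sha512 "$dhgroup" "$keyid"
    done
  done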
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:20.823 22:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:20.823 22:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:21.082 22:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:22:21.082 22:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:21.082 22:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:21.082 22:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:21.082 22:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:21.082 22:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.082 22:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.082 22:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.082 22:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.082 22:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.082 22:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.082 22:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.341 00:22:21.341 22:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:21.341 22:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:21.341 22:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.600 22:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.600 22:05:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.600 22:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.600 22:05:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.858 22:05:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.858 22:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:21.858 { 00:22:21.858 "cntlid": 121, 00:22:21.858 "qid": 0, 00:22:21.858 "state": "enabled", 00:22:21.858 "thread": "nvmf_tgt_poll_group_000", 00:22:21.859 "listen_address": { 00:22:21.859 "trtype": "TCP", 00:22:21.859 "adrfam": "IPv4", 
00:22:21.859 "traddr": "10.0.0.2", 00:22:21.859 "trsvcid": "4420" 00:22:21.859 }, 00:22:21.859 "peer_address": { 00:22:21.859 "trtype": "TCP", 00:22:21.859 "adrfam": "IPv4", 00:22:21.859 "traddr": "10.0.0.1", 00:22:21.859 "trsvcid": "34772" 00:22:21.859 }, 00:22:21.859 "auth": { 00:22:21.859 "state": "completed", 00:22:21.859 "digest": "sha512", 00:22:21.859 "dhgroup": "ffdhe4096" 00:22:21.859 } 00:22:21.859 } 00:22:21.859 ]' 00:22:21.859 22:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:21.859 22:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.859 22:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:21.859 22:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:21.859 22:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:21.859 22:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.859 22:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.859 22:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.116 22:05:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkyNzNiYTJlYmYzZmFmMTViZjM1OTRkMTU0MDg1MmEyZDQ0OGMzNDI5MjZiYmJlkzD1iw==: --dhchap-ctrl-secret DHHC-1:03:NjE4OWVkYmJkMGM0MzJhYmE2MTgyZDY0NDIxYWUwZDI3NGNlNWQ0NzMyZWU3YTg5OTNkY2E3NTc5NjRlY2FhN8pEGEs=: 00:22:23.050 22:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.050 22:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.050 22:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.050 22:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.050 22:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.050 22:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:23.050 22:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:23.050 22:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:23.308 22:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:22:23.308 22:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:23.308 22:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:23.308 22:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:23.308 22:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:23.308 22:05:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.308 22:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.308 22:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.308 22:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.308 22:05:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.308 22:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.308 22:05:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.873 00:22:23.873 22:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:23.873 22:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:23.873 22:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.130 22:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.130 22:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.130 22:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.130 22:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.130 22:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.130 22:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:24.130 { 00:22:24.130 "cntlid": 123, 00:22:24.130 "qid": 0, 00:22:24.130 "state": "enabled", 00:22:24.130 "thread": "nvmf_tgt_poll_group_000", 00:22:24.130 "listen_address": { 00:22:24.130 "trtype": "TCP", 00:22:24.130 "adrfam": "IPv4", 00:22:24.130 "traddr": "10.0.0.2", 00:22:24.130 "trsvcid": "4420" 00:22:24.130 }, 00:22:24.130 "peer_address": { 00:22:24.130 "trtype": "TCP", 00:22:24.130 "adrfam": "IPv4", 00:22:24.130 "traddr": "10.0.0.1", 00:22:24.130 "trsvcid": "34812" 00:22:24.130 }, 00:22:24.130 "auth": { 00:22:24.130 "state": "completed", 00:22:24.130 "digest": "sha512", 00:22:24.130 "dhgroup": "ffdhe4096" 00:22:24.130 } 00:22:24.130 } 00:22:24.130 ]' 00:22:24.130 22:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:24.130 22:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:24.130 22:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:24.130 22:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:24.130 22:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:24.130 22:05:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.130 22:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.130 22:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.405 22:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTVkYmQ3MzgyZDVkNWRhOWExYjBmNGVlYmUwZDg0MGJHHEcY: --dhchap-ctrl-secret DHHC-1:02:YmU3MzUxYmYxOWVmNTI1ZjIwYjFiMTk3Yjk2NzAzZmQwM2RmMjVkZjFlZWYwYWQ0E3C4iA==: 00:22:25.368 22:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.368 22:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:25.368 22:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.368 22:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.368 22:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.368 22:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:25.368 22:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:25.368 22:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:25.626 22:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:22:25.626 22:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:25.626 22:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:25.626 22:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:25.626 22:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:25.626 22:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.626 22:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.626 22:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.626 22:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.626 22:05:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.626 22:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.626 22:05:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:26.190 00:22:26.190 22:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:26.190 22:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:26.190 22:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.448 22:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.448 22:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.448 22:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.448 22:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.448 22:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.448 22:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:26.448 { 00:22:26.448 "cntlid": 125, 00:22:26.448 "qid": 0, 00:22:26.448 "state": "enabled", 00:22:26.448 "thread": "nvmf_tgt_poll_group_000", 00:22:26.448 "listen_address": { 00:22:26.448 "trtype": "TCP", 00:22:26.448 "adrfam": "IPv4", 00:22:26.448 "traddr": "10.0.0.2", 00:22:26.448 "trsvcid": "4420" 00:22:26.448 }, 00:22:26.448 "peer_address": { 00:22:26.448 "trtype": "TCP", 00:22:26.448 "adrfam": "IPv4", 00:22:26.448 "traddr": "10.0.0.1", 00:22:26.448 "trsvcid": "34832" 00:22:26.448 }, 00:22:26.448 "auth": { 00:22:26.448 "state": "completed", 00:22:26.448 "digest": "sha512", 00:22:26.448 "dhgroup": "ffdhe4096" 00:22:26.448 } 00:22:26.448 } 00:22:26.448 ]' 00:22:26.448 22:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:26.448 22:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:26.448 22:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:26.448 22:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:26.448 22:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:26.448 22:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.448 22:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.448 22:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.706 22:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2E1NGIzMTcxYTNjZGNhNGZjYmJkYzk1YzExYTE5OWQ3Y2NkNDBmNzI5ZmE4M2JiS38vHQ==: --dhchap-ctrl-secret DHHC-1:01:NjljODkwZGZkMzg1YTVlOTJkNTg0OGRiY2VmZGJkOGVsy0VT: 00:22:27.638 22:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
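Each round also proves the path from the kernel initiator: nvme-cli connects with the same DH-HMAC-CHAP secret pair, must disconnect exactly one controller, and the host entry is then removed from the subsystem. In outline — the <...> placeholders stand in for the DHHC-1 blobs logged verbatim above:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret 'DHHC-1:00:<key>' --dhchap-ctrl-secret 'DHHC-1:03:<ctrl key>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: disconnected 1 controller(s)
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"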
00:22:27.638 22:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:27.638 22:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.638 22:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.638 22:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.638 22:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:27.638 22:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:27.638 22:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:27.897 22:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:22:27.897 22:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:27.897 22:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:27.897 22:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:22:27.897 22:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:27.897 22:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.897 22:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:27.897 22:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.897 22:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.897 22:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.897 22:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:27.897 22:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:28.463 00:22:28.463 22:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:28.463 22:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.463 22:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:28.721 22:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.721 22:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.721 22:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.721 22:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:22:28.721 22:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.721 22:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:28.721 { 00:22:28.721 "cntlid": 127, 00:22:28.721 "qid": 0, 00:22:28.721 "state": "enabled", 00:22:28.721 "thread": "nvmf_tgt_poll_group_000", 00:22:28.721 "listen_address": { 00:22:28.721 "trtype": "TCP", 00:22:28.721 "adrfam": "IPv4", 00:22:28.721 "traddr": "10.0.0.2", 00:22:28.721 "trsvcid": "4420" 00:22:28.721 }, 00:22:28.721 "peer_address": { 00:22:28.721 "trtype": "TCP", 00:22:28.721 "adrfam": "IPv4", 00:22:28.721 "traddr": "10.0.0.1", 00:22:28.721 "trsvcid": "34852" 00:22:28.721 }, 00:22:28.721 "auth": { 00:22:28.721 "state": "completed", 00:22:28.722 "digest": "sha512", 00:22:28.722 "dhgroup": "ffdhe4096" 00:22:28.722 } 00:22:28.722 } 00:22:28.722 ]' 00:22:28.722 22:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:28.722 22:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:28.722 22:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:28.722 22:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:28.722 22:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:28.722 22:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.722 22:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.722 22:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.980 22:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGIwMDcyMDlmODVmZWJkYzUwNDI4MzYzMDNlYTRkMmI4NDM1YzQ3ZTllYzcxMGRmZjk5NzE4NTlmOGY2ZGQ1Odfg3ic=: 00:22:29.915 22:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.915 22:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:29.915 22:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.915 22:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.915 22:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.915 22:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:29.915 22:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:29.915 22:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:29.915 22:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:30.200 22:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:22:30.200 22:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:30.200 22:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:30.200 22:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:30.200 22:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:30.200 22:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.200 22:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.200 22:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.200 22:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.200 22:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.200 22:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.200 22:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:30.765 00:22:30.765 22:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:30.765 22:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:30.765 22:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.022 22:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.022 22:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.022 22:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.022 22:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.023 22:05:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.023 22:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:31.023 { 00:22:31.023 "cntlid": 129, 00:22:31.023 "qid": 0, 00:22:31.023 "state": "enabled", 00:22:31.023 "thread": "nvmf_tgt_poll_group_000", 00:22:31.023 "listen_address": { 00:22:31.023 "trtype": "TCP", 00:22:31.023 "adrfam": "IPv4", 00:22:31.023 "traddr": "10.0.0.2", 00:22:31.023 "trsvcid": "4420" 00:22:31.023 }, 00:22:31.023 "peer_address": { 00:22:31.023 "trtype": "TCP", 00:22:31.023 "adrfam": "IPv4", 00:22:31.023 "traddr": "10.0.0.1", 00:22:31.023 "trsvcid": "52078" 00:22:31.023 }, 00:22:31.023 "auth": { 00:22:31.023 "state": "completed", 00:22:31.023 "digest": "sha512", 00:22:31.023 "dhgroup": "ffdhe6144" 00:22:31.023 } 00:22:31.023 } 00:22:31.023 ]' 00:22:31.023 22:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:31.023 22:05:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:31.023 22:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:31.023 22:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:31.023 22:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:31.280 22:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.280 22:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.280 22:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.537 22:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkyNzNiYTJlYmYzZmFmMTViZjM1OTRkMTU0MDg1MmEyZDQ0OGMzNDI5MjZiYmJlkzD1iw==: --dhchap-ctrl-secret DHHC-1:03:NjE4OWVkYmJkMGM0MzJhYmE2MTgyZDY0NDIxYWUwZDI3NGNlNWQ0NzMyZWU3YTg5OTNkY2E3NTc5NjRlY2FhN8pEGEs=: 00:22:32.468 22:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.468 22:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:32.468 22:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.468 22:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.468 22:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.468 22:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:32.468 22:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:32.468 22:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:32.725 22:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:22:32.725 22:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:32.725 22:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:32.725 22:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:32.725 22:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:32.725 22:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.725 22:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.725 22:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.725 22:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.725 22:05:51 
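On the SPDK host side, each authenticated attach pairs bdev_nvme_set_options (pinning the digest and DH group the initiator will offer) with bdev_nvme_attach_controller naming the key pair. For the ffdhe6144/key1 iteration in progress here, that is roughly the following; key1/ckey1 are the keyring names registered on the host socket earlier in the run, and $hostnqn abbreviates the uuid NQN used throughout:

  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1   # bidirectional: host and ctrlr keys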
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.725 22:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.725 22:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.290 00:22:33.290 22:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:33.290 22:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:33.290 22:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.548 22:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.548 22:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.548 22:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.548 22:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.548 22:05:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.548 22:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:33.548 { 00:22:33.548 "cntlid": 131, 00:22:33.548 "qid": 0, 00:22:33.548 "state": "enabled", 00:22:33.548 "thread": "nvmf_tgt_poll_group_000", 00:22:33.548 "listen_address": { 00:22:33.548 "trtype": "TCP", 00:22:33.548 "adrfam": "IPv4", 00:22:33.548 "traddr": "10.0.0.2", 00:22:33.548 "trsvcid": "4420" 00:22:33.548 }, 00:22:33.548 "peer_address": { 00:22:33.548 "trtype": "TCP", 00:22:33.548 "adrfam": "IPv4", 00:22:33.548 "traddr": "10.0.0.1", 00:22:33.548 "trsvcid": "52108" 00:22:33.548 }, 00:22:33.548 "auth": { 00:22:33.548 "state": "completed", 00:22:33.548 "digest": "sha512", 00:22:33.548 "dhgroup": "ffdhe6144" 00:22:33.548 } 00:22:33.548 } 00:22:33.548 ]' 00:22:33.548 22:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:33.548 22:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:33.548 22:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:33.548 22:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:33.548 22:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:33.548 22:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.548 22:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.548 22:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.806 22:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTVkYmQ3MzgyZDVkNWRhOWExYjBmNGVlYmUwZDg0MGJHHEcY: --dhchap-ctrl-secret DHHC-1:02:YmU3MzUxYmYxOWVmNTI1ZjIwYjFiMTk3Yjk2NzAzZmQwM2RmMjVkZjFlZWYwYWQ0E3C4iA==: 00:22:35.177 22:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.177 22:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:35.177 22:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.177 22:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.177 22:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.177 22:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:35.177 22:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:35.177 22:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:35.177 22:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:22:35.177 22:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:35.177 22:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:35.177 22:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:35.177 22:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:35.177 22:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.177 22:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:35.177 22:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.177 22:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.177 22:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.177 22:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:35.177 22:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:35.742 00:22:35.742 22:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:35.742 22:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:35.743 22:05:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.000 22:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.000 22:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.000 22:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.000 22:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.000 22:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.000 22:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:36.000 { 00:22:36.000 "cntlid": 133, 00:22:36.000 "qid": 0, 00:22:36.000 "state": "enabled", 00:22:36.000 "thread": "nvmf_tgt_poll_group_000", 00:22:36.000 "listen_address": { 00:22:36.000 "trtype": "TCP", 00:22:36.000 "adrfam": "IPv4", 00:22:36.000 "traddr": "10.0.0.2", 00:22:36.000 "trsvcid": "4420" 00:22:36.000 }, 00:22:36.000 "peer_address": { 00:22:36.000 "trtype": "TCP", 00:22:36.000 "adrfam": "IPv4", 00:22:36.000 "traddr": "10.0.0.1", 00:22:36.000 "trsvcid": "52128" 00:22:36.000 }, 00:22:36.000 "auth": { 00:22:36.000 "state": "completed", 00:22:36.001 "digest": "sha512", 00:22:36.001 "dhgroup": "ffdhe6144" 00:22:36.001 } 00:22:36.001 } 00:22:36.001 ]' 00:22:36.001 22:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:36.001 22:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:36.001 22:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:36.001 22:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:36.001 22:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:36.001 22:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.001 22:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.001 22:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.258 22:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2E1NGIzMTcxYTNjZGNhNGZjYmJkYzk1YzExYTE5OWQ3Y2NkNDBmNzI5ZmE4M2JiS38vHQ==: --dhchap-ctrl-secret DHHC-1:01:NjljODkwZGZkMzg1YTVlOTJkNTg0OGRiY2VmZGJkOGVsy0VT: 00:22:37.192 22:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.192 22:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.192 22:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.192 22:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.192 22:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.192 22:05:56 
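The key3 iteration that begins below exercises unidirectional authentication: the ${ckeys[$3]:+...} expansion at target/auth.sh@37 yields an empty array when no controller key is registered for the slot, so nvmf_subsystem_add_host and the attach are issued with --dhchap-key key3 and no --dhchap-ctrlr-key. The mechanism in isolation, as a sketch:

  ckey=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"})   # empty array when ckeys[3] is unset
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key3 "${ckey[@]}"               # ckey expands to nothing here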
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:37.192 22:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:37.192 22:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:37.451 22:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:22:37.451 22:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:37.451 22:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:37.451 22:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:22:37.451 22:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:37.451 22:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:37.451 22:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:37.451 22:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.451 22:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.451 22:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.451 22:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:37.451 22:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:38.017 00:22:38.017 22:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:38.017 22:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:38.017 22:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.275 22:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.275 22:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.275 22:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:38.275 22:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.275 22:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:38.275 22:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:38.275 { 00:22:38.275 "cntlid": 135, 00:22:38.275 "qid": 0, 00:22:38.275 "state": "enabled", 00:22:38.275 "thread": "nvmf_tgt_poll_group_000", 00:22:38.275 "listen_address": { 00:22:38.275 "trtype": "TCP", 00:22:38.275 "adrfam": "IPv4", 00:22:38.275 "traddr": "10.0.0.2", 00:22:38.275 "trsvcid": "4420" 00:22:38.275 }, 
00:22:38.275 "peer_address": { 00:22:38.275 "trtype": "TCP", 00:22:38.275 "adrfam": "IPv4", 00:22:38.275 "traddr": "10.0.0.1", 00:22:38.275 "trsvcid": "52144" 00:22:38.275 }, 00:22:38.275 "auth": { 00:22:38.275 "state": "completed", 00:22:38.275 "digest": "sha512", 00:22:38.275 "dhgroup": "ffdhe6144" 00:22:38.275 } 00:22:38.275 } 00:22:38.275 ]' 00:22:38.275 22:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:38.571 22:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:38.571 22:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:38.571 22:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:38.571 22:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:38.571 22:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.571 22:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.571 22:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.850 22:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGIwMDcyMDlmODVmZWJkYzUwNDI4MzYzMDNlYTRkMmI4NDM1YzQ3ZTllYzcxMGRmZjk5NzE4NTlmOGY2ZGQ1Odfg3ic=: 00:22:39.784 22:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.784 22:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:39.784 22:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.784 22:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.784 22:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.784 22:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:39.784 22:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:39.784 22:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:39.784 22:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:40.042 22:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:22:40.042 22:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:40.042 22:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:40.042 22:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:40.042 22:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:40.042 22:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:22:40.042 22:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.042 22:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.042 22:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.042 22:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.042 22:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.042 22:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:40.976 00:22:40.976 22:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:40.976 22:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:40.976 22:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.976 22:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.976 22:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.976 22:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.976 22:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.976 22:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.976 22:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:40.976 { 00:22:40.976 "cntlid": 137, 00:22:40.976 "qid": 0, 00:22:40.976 "state": "enabled", 00:22:40.976 "thread": "nvmf_tgt_poll_group_000", 00:22:40.976 "listen_address": { 00:22:40.976 "trtype": "TCP", 00:22:40.976 "adrfam": "IPv4", 00:22:40.976 "traddr": "10.0.0.2", 00:22:40.976 "trsvcid": "4420" 00:22:40.976 }, 00:22:40.976 "peer_address": { 00:22:40.976 "trtype": "TCP", 00:22:40.976 "adrfam": "IPv4", 00:22:40.976 "traddr": "10.0.0.1", 00:22:40.976 "trsvcid": "55220" 00:22:40.976 }, 00:22:40.976 "auth": { 00:22:40.976 "state": "completed", 00:22:40.976 "digest": "sha512", 00:22:40.976 "dhgroup": "ffdhe8192" 00:22:40.976 } 00:22:40.976 } 00:22:40.976 ]' 00:22:40.976 22:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:40.976 22:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:40.976 22:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:41.234 22:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:41.234 22:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:41.234 22:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.234 22:06:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.234 22:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.492 22:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkyNzNiYTJlYmYzZmFmMTViZjM1OTRkMTU0MDg1MmEyZDQ0OGMzNDI5MjZiYmJlkzD1iw==: --dhchap-ctrl-secret DHHC-1:03:NjE4OWVkYmJkMGM0MzJhYmE2MTgyZDY0NDIxYWUwZDI3NGNlNWQ0NzMyZWU3YTg5OTNkY2E3NTc5NjRlY2FhN8pEGEs=: 00:22:42.427 22:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.427 22:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:42.427 22:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.427 22:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.427 22:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.427 22:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:42.427 22:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:42.427 22:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:42.685 22:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:42.685 22:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:42.685 22:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:42.685 22:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:42.685 22:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:42.685 22:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.685 22:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.685 22:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.685 22:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.685 22:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.685 22:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:42.685 22:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.619 00:22:43.619 22:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:43.619 22:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.619 22:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:43.877 22:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.877 22:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.877 22:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.877 22:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.877 22:06:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.877 22:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:43.877 { 00:22:43.877 "cntlid": 139, 00:22:43.877 "qid": 0, 00:22:43.877 "state": "enabled", 00:22:43.877 "thread": "nvmf_tgt_poll_group_000", 00:22:43.877 "listen_address": { 00:22:43.877 "trtype": "TCP", 00:22:43.877 "adrfam": "IPv4", 00:22:43.877 "traddr": "10.0.0.2", 00:22:43.877 "trsvcid": "4420" 00:22:43.877 }, 00:22:43.877 "peer_address": { 00:22:43.877 "trtype": "TCP", 00:22:43.877 "adrfam": "IPv4", 00:22:43.877 "traddr": "10.0.0.1", 00:22:43.877 "trsvcid": "55238" 00:22:43.877 }, 00:22:43.877 "auth": { 00:22:43.877 "state": "completed", 00:22:43.877 "digest": "sha512", 00:22:43.877 "dhgroup": "ffdhe8192" 00:22:43.877 } 00:22:43.877 } 00:22:43.877 ]' 00:22:43.877 22:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:43.877 22:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:43.877 22:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:43.877 22:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:43.877 22:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:43.877 22:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.877 22:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.877 22:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.135 22:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:OTVkYmQ3MzgyZDVkNWRhOWExYjBmNGVlYmUwZDg0MGJHHEcY: --dhchap-ctrl-secret DHHC-1:02:YmU3MzUxYmYxOWVmNTI1ZjIwYjFiMTk3Yjk2NzAzZmQwM2RmMjVkZjFlZWYwYWQ0E3C4iA==: 00:22:45.067 22:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:45.067 22:06:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:45.067 22:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.067 22:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.067 22:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.067 22:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:45.067 22:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:45.067 22:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:45.325 22:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:22:45.325 22:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:45.325 22:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:45.325 22:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:45.325 22:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:45.325 22:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:45.325 22:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:45.325 22:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.325 22:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.325 22:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.325 22:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:45.325 22:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:46.258 00:22:46.258 22:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:46.258 22:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.258 22:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:46.516 22:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.516 22:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.516 22:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.516 22:06:05 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:46.516 22:06:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.516 22:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:46.516 { 00:22:46.516 "cntlid": 141, 00:22:46.516 "qid": 0, 00:22:46.516 "state": "enabled", 00:22:46.516 "thread": "nvmf_tgt_poll_group_000", 00:22:46.516 "listen_address": { 00:22:46.516 "trtype": "TCP", 00:22:46.516 "adrfam": "IPv4", 00:22:46.516 "traddr": "10.0.0.2", 00:22:46.516 "trsvcid": "4420" 00:22:46.516 }, 00:22:46.516 "peer_address": { 00:22:46.516 "trtype": "TCP", 00:22:46.516 "adrfam": "IPv4", 00:22:46.516 "traddr": "10.0.0.1", 00:22:46.516 "trsvcid": "55270" 00:22:46.516 }, 00:22:46.516 "auth": { 00:22:46.516 "state": "completed", 00:22:46.516 "digest": "sha512", 00:22:46.516 "dhgroup": "ffdhe8192" 00:22:46.516 } 00:22:46.516 } 00:22:46.516 ]' 00:22:46.516 22:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:46.516 22:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:46.516 22:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:46.774 22:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:46.774 22:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:46.774 22:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.774 22:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.774 22:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.031 22:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:M2E1NGIzMTcxYTNjZGNhNGZjYmJkYzk1YzExYTE5OWQ3Y2NkNDBmNzI5ZmE4M2JiS38vHQ==: --dhchap-ctrl-secret DHHC-1:01:NjljODkwZGZkMzg1YTVlOTJkNTg0OGRiY2VmZGJkOGVsy0VT: 00:22:47.962 22:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.962 22:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:47.962 22:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.962 22:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.962 22:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.962 22:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:47.962 22:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:47.962 22:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:48.220 22:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:22:48.220 22:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:48.220 22:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:48.220 22:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:48.220 22:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:48.220 22:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:48.220 22:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:48.220 22:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.220 22:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.220 22:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.220 22:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:48.220 22:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:49.154 00:22:49.154 22:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:49.154 22:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.154 22:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:49.412 22:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.412 22:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.412 22:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.412 22:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.412 22:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.412 22:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:49.412 { 00:22:49.412 "cntlid": 143, 00:22:49.412 "qid": 0, 00:22:49.412 "state": "enabled", 00:22:49.412 "thread": "nvmf_tgt_poll_group_000", 00:22:49.412 "listen_address": { 00:22:49.412 "trtype": "TCP", 00:22:49.412 "adrfam": "IPv4", 00:22:49.412 "traddr": "10.0.0.2", 00:22:49.412 "trsvcid": "4420" 00:22:49.412 }, 00:22:49.412 "peer_address": { 00:22:49.412 "trtype": "TCP", 00:22:49.412 "adrfam": "IPv4", 00:22:49.412 "traddr": "10.0.0.1", 00:22:49.412 "trsvcid": "42786" 00:22:49.412 }, 00:22:49.412 "auth": { 00:22:49.412 "state": "completed", 00:22:49.412 "digest": "sha512", 00:22:49.412 "dhgroup": "ffdhe8192" 00:22:49.412 } 00:22:49.412 } 00:22:49.412 ]' 00:22:49.412 22:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:49.412 22:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:49.412 
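The three jq assertions running through this stretch of the trace are the heart of connect_authenticate: after every attach, the test confirms the negotiated digest, DH group, and final auth state on the target-side qpair, then detaches. A minimal sketch of that check follows, assuming rpc_cmd talks to the target's RPC socket and hostrpc to /var/tmp/host.sock as the helpers above do; the loop wrapper and here-strings are assumptions, while the RPC names and jq filters are copied verbatim from the trace.

# Hedged sketch; not the verbatim auth.sh source.
hostrpc() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock "$@"
}

# The host must have attached exactly the controller we asked for.
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Ask the target what it negotiated on the admin qpair.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Detach so the next digest/dhgroup/key combination starts clean.
hostrpc bdev_nvme_detach_controller nvme0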
22:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:49.412 22:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:49.412 22:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:49.412 22:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:49.412 22:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.412 22:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.670 22:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGIwMDcyMDlmODVmZWJkYzUwNDI4MzYzMDNlYTRkMmI4NDM1YzQ3ZTllYzcxMGRmZjk5NzE4NTlmOGY2ZGQ1Odfg3ic=: 00:22:50.602 22:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.602 22:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:50.602 22:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.602 22:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.602 22:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.602 22:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:50.602 22:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:50.602 22:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:50.602 22:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:50.602 22:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:50.602 22:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:50.860 22:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:50.860 22:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:50.860 22:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:50.860 22:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:50.860 22:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:50.860 22:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:50.860 22:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:22:50.860 22:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.860 22:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.860 22:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.860 22:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:50.860 22:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.795 00:22:51.795 22:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:51.795 22:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:51.795 22:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.053 22:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.053 22:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:52.053 22:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.053 22:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.053 22:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.053 22:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:52.053 { 00:22:52.053 "cntlid": 145, 00:22:52.053 "qid": 0, 00:22:52.053 "state": "enabled", 00:22:52.053 "thread": "nvmf_tgt_poll_group_000", 00:22:52.053 "listen_address": { 00:22:52.053 "trtype": "TCP", 00:22:52.053 "adrfam": "IPv4", 00:22:52.053 "traddr": "10.0.0.2", 00:22:52.053 "trsvcid": "4420" 00:22:52.053 }, 00:22:52.053 "peer_address": { 00:22:52.053 "trtype": "TCP", 00:22:52.053 "adrfam": "IPv4", 00:22:52.053 "traddr": "10.0.0.1", 00:22:52.053 "trsvcid": "42810" 00:22:52.053 }, 00:22:52.053 "auth": { 00:22:52.053 "state": "completed", 00:22:52.053 "digest": "sha512", 00:22:52.053 "dhgroup": "ffdhe8192" 00:22:52.053 } 00:22:52.053 } 00:22:52.053 ]' 00:22:52.053 22:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:52.053 22:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:52.053 22:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:52.053 22:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:52.053 22:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:52.053 22:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:52.053 22:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.053 22:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.311 22:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ODkyNzNiYTJlYmYzZmFmMTViZjM1OTRkMTU0MDg1MmEyZDQ0OGMzNDI5MjZiYmJlkzD1iw==: --dhchap-ctrl-secret DHHC-1:03:NjE4OWVkYmJkMGM0MzJhYmE2MTgyZDY0NDIxYWUwZDI3NGNlNWQ0NzMyZWU3YTg5OTNkY2E3NTc5NjRlY2FhN8pEGEs=: 00:22:53.326 22:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.326 22:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:53.326 22:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.326 22:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.326 22:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.326 22:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:53.326 22:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.326 22:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.326 22:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.326 22:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:53.326 22:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:53.326 22:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:53.326 22:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:53.326 22:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:53.326 22:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:53.326 22:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:53.326 22:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:53.326 22:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:22:54.261 request: 00:22:54.261 { 00:22:54.261 "name": "nvme0", 00:22:54.261 "trtype": "tcp", 00:22:54.261 "traddr": "10.0.0.2", 00:22:54.261 "adrfam": "ipv4", 00:22:54.261 "trsvcid": "4420", 00:22:54.261 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:54.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:54.261 "prchk_reftag": false, 00:22:54.261 "prchk_guard": false, 00:22:54.261 "hdgst": false, 00:22:54.261 "ddgst": false, 00:22:54.261 "dhchap_key": "key2", 00:22:54.261 "method": "bdev_nvme_attach_controller", 00:22:54.261 "req_id": 1 00:22:54.261 } 00:22:54.261 Got JSON-RPC error response 00:22:54.261 response: 00:22:54.261 { 00:22:54.261 "code": -5, 00:22:54.261 "message": "Input/output error" 00:22:54.261 } 00:22:54.261 22:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:54.261 22:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:54.261 22:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:54.261 22:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:54.261 22:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:54.261 22:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.261 22:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.261 22:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.261 22:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:54.261 22:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.261 22:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.261 22:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.261 22:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:54.261 22:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:54.261 22:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:54.261 22:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:54.261 22:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:54.261 22:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:54.261 22:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:54.261 22:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:54.261 22:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:55.196 request: 00:22:55.196 { 00:22:55.196 "name": "nvme0", 00:22:55.196 "trtype": "tcp", 00:22:55.196 "traddr": "10.0.0.2", 00:22:55.196 "adrfam": "ipv4", 00:22:55.196 "trsvcid": "4420", 00:22:55.196 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:55.196 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:55.196 "prchk_reftag": false, 00:22:55.196 "prchk_guard": false, 00:22:55.196 "hdgst": false, 00:22:55.196 "ddgst": false, 00:22:55.196 "dhchap_key": "key1", 00:22:55.196 "dhchap_ctrlr_key": "ckey2", 00:22:55.196 "method": "bdev_nvme_attach_controller", 00:22:55.196 "req_id": 1 00:22:55.196 } 00:22:55.196 Got JSON-RPC error response 00:22:55.196 response: 00:22:55.196 { 00:22:55.196 "code": -5, 00:22:55.196 "message": "Input/output error" 00:22:55.196 } 00:22:55.196 22:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:55.196 22:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:55.196 22:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:55.196 22:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:55.196 22:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:55.196 22:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.196 22:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.196 22:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.196 22:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:55.196 22:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.196 22:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.196 22:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.196 22:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.196 22:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:55.196 22:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.196 22:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:22:55.196 22:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:55.196 22:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:55.196 22:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:55.196 22:06:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.196 22:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.131 request: 00:22:56.131 { 00:22:56.131 "name": "nvme0", 00:22:56.131 "trtype": "tcp", 00:22:56.131 "traddr": "10.0.0.2", 00:22:56.131 "adrfam": "ipv4", 00:22:56.131 "trsvcid": "4420", 00:22:56.131 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:56.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:56.131 "prchk_reftag": false, 00:22:56.131 "prchk_guard": false, 00:22:56.131 "hdgst": false, 00:22:56.131 "ddgst": false, 00:22:56.131 "dhchap_key": "key1", 00:22:56.131 "dhchap_ctrlr_key": "ckey1", 00:22:56.131 "method": "bdev_nvme_attach_controller", 00:22:56.131 "req_id": 1 00:22:56.131 } 00:22:56.131 Got JSON-RPC error response 00:22:56.131 response: 00:22:56.131 { 00:22:56.131 "code": -5, 00:22:56.131 "message": "Input/output error" 00:22:56.131 } 00:22:56.132 22:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:56.132 22:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:56.132 22:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:56.132 22:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:56.132 22:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:56.132 22:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.132 22:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.132 22:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.132 22:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 4083218 00:22:56.132 22:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 4083218 ']' 00:22:56.132 22:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 4083218 00:22:56.132 22:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:56.132 22:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:56.132 22:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4083218 00:22:56.132 22:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:56.132 22:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:22:56.132 22:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4083218' 00:22:56.132 killing process with pid 4083218 00:22:56.132 22:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 4083218 00:22:56.132 22:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 4083218 00:22:57.505 22:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:57.505 22:06:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:57.505 22:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:57.505 22:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.505 22:06:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=4106007 00:22:57.505 22:06:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:57.505 22:06:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 4106007 00:22:57.505 22:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4106007 ']' 00:22:57.505 22:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.505 22:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:57.505 22:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.505 22:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:57.505 22:06:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.438 22:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:58.438 22:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:58.438 22:06:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:58.438 22:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:58.438 22:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.438 22:06:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.438 22:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:58.438 22:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 4106007 00:22:58.438 22:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4106007 ']' 00:22:58.438 22:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.438 22:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:58.438 22:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
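At this point the test also restarts the target: killprocess reaps the old nvmf_tgt (pid 4083218 here) and nvmfappstart brings up a fresh one with auth-layer logging (-L nvmf_auth) before waitforlisten blocks on its RPC socket. A rough reconstruction of that step, assuming waitforlisten simply polls the socket until an RPC answers (the real helper in autotest_common.sh does more bookkeeping); the binary path, network namespace, and flags are copied from the trace, and rpc_get_methods is one of the few RPCs that responds while the app sits in --wait-for-rpc.

# Hedged sketch of nvmfappstart + waitforlisten as seen above.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!

echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done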
00:22:58.438 22:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:58.438 22:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.697 22:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:58.697 22:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:58.697 22:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:58.697 22:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.697 22:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.955 22:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.955 22:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:58.955 22:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:58.955 22:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:58.955 22:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:58.955 22:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:58.955 22:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:58.955 22:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:58.955 22:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.955 22:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.955 22:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.956 22:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:58.956 22:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:59.888 00:22:59.888 22:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:59.888 22:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:59.888 22:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.146 22:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.146 22:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:00.146 22:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.146 22:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.146 22:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.146 22:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:23:00.146 { 00:23:00.146 
"cntlid": 1, 00:23:00.146 "qid": 0, 00:23:00.146 "state": "enabled", 00:23:00.146 "thread": "nvmf_tgt_poll_group_000", 00:23:00.146 "listen_address": { 00:23:00.146 "trtype": "TCP", 00:23:00.146 "adrfam": "IPv4", 00:23:00.146 "traddr": "10.0.0.2", 00:23:00.146 "trsvcid": "4420" 00:23:00.146 }, 00:23:00.146 "peer_address": { 00:23:00.146 "trtype": "TCP", 00:23:00.146 "adrfam": "IPv4", 00:23:00.146 "traddr": "10.0.0.1", 00:23:00.146 "trsvcid": "36238" 00:23:00.146 }, 00:23:00.146 "auth": { 00:23:00.146 "state": "completed", 00:23:00.146 "digest": "sha512", 00:23:00.146 "dhgroup": "ffdhe8192" 00:23:00.146 } 00:23:00.146 } 00:23:00.146 ]' 00:23:00.146 22:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:23:00.146 22:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:00.146 22:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:23:00.146 22:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:00.146 22:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:23:00.403 22:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:00.403 22:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:00.403 22:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:00.661 22:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MGIwMDcyMDlmODVmZWJkYzUwNDI4MzYzMDNlYTRkMmI4NDM1YzQ3ZTllYzcxMGRmZjk5NzE4NTlmOGY2ZGQ1Odfg3ic=: 00:23:01.593 22:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:01.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:01.593 22:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:01.593 22:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.593 22:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.593 22:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.593 22:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:23:01.593 22:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.593 22:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.593 22:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.593 22:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:01.593 22:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:01.851 22:06:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:01.851 22:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:01.851 22:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:01.851 22:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:01.851 22:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:01.851 22:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:01.851 22:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:01.851 22:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:01.851 22:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:02.109 request: 00:23:02.109 { 00:23:02.109 "name": "nvme0", 00:23:02.109 "trtype": "tcp", 00:23:02.109 "traddr": "10.0.0.2", 00:23:02.109 "adrfam": "ipv4", 00:23:02.109 "trsvcid": "4420", 00:23:02.109 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:02.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:02.109 "prchk_reftag": false, 00:23:02.109 "prchk_guard": false, 00:23:02.109 "hdgst": false, 00:23:02.109 "ddgst": false, 00:23:02.109 "dhchap_key": "key3", 00:23:02.109 "method": "bdev_nvme_attach_controller", 00:23:02.109 "req_id": 1 00:23:02.109 } 00:23:02.109 Got JSON-RPC error response 00:23:02.109 response: 00:23:02.109 { 00:23:02.109 "code": -5, 00:23:02.109 "message": "Input/output error" 00:23:02.109 } 00:23:02.109 22:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:02.109 22:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:02.109 22:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:02.109 22:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:02.109 22:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:23:02.109 22:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:23:02.109 22:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:02.109 22:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:02.366 22:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:02.366 22:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:02.366 22:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:02.366 22:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:02.366 22:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:02.366 22:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:02.366 22:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:02.366 22:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:02.366 22:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:02.624 request: 00:23:02.624 { 00:23:02.624 "name": "nvme0", 00:23:02.624 "trtype": "tcp", 00:23:02.624 "traddr": "10.0.0.2", 00:23:02.624 "adrfam": "ipv4", 00:23:02.624 "trsvcid": "4420", 00:23:02.624 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:02.624 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:02.624 "prchk_reftag": false, 00:23:02.624 "prchk_guard": false, 00:23:02.624 "hdgst": false, 00:23:02.624 "ddgst": false, 00:23:02.624 "dhchap_key": "key3", 00:23:02.624 "method": "bdev_nvme_attach_controller", 00:23:02.624 "req_id": 1 00:23:02.624 } 00:23:02.624 Got JSON-RPC error response 00:23:02.624 response: 00:23:02.624 { 00:23:02.624 "code": -5, 00:23:02.624 "message": "Input/output error" 00:23:02.624 } 00:23:02.624 22:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:02.624 22:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:02.624 22:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:02.624 22:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:02.624 22:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:23:02.624 22:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:23:02.624 22:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:23:02.624 22:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:02.624 22:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:02.624 22:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:02.882 22:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:02.882 22:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.882 22:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.882 22:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.882 22:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:02.882 22:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.882 22:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.882 22:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.882 22:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:02.882 22:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:23:02.882 22:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:02.882 22:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:23:02.882 22:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:02.882 22:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:23:02.882 22:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:02.882 22:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:02.882 22:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:03.140 request: 00:23:03.140 { 00:23:03.140 "name": "nvme0", 00:23:03.140 "trtype": "tcp", 00:23:03.140 "traddr": "10.0.0.2", 00:23:03.140 "adrfam": "ipv4", 00:23:03.140 "trsvcid": "4420", 00:23:03.140 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:03.140 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:03.140 "prchk_reftag": false, 00:23:03.140 "prchk_guard": false, 00:23:03.140 "hdgst": false, 00:23:03.140 "ddgst": false, 00:23:03.140 
"dhchap_key": "key0", 00:23:03.140 "dhchap_ctrlr_key": "key1", 00:23:03.140 "method": "bdev_nvme_attach_controller", 00:23:03.140 "req_id": 1 00:23:03.140 } 00:23:03.140 Got JSON-RPC error response 00:23:03.140 response: 00:23:03.140 { 00:23:03.140 "code": -5, 00:23:03.140 "message": "Input/output error" 00:23:03.140 } 00:23:03.140 22:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:23:03.140 22:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:03.140 22:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:03.140 22:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:03.140 22:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:03.140 22:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:03.397 00:23:03.397 22:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:23:03.397 22:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:23:03.397 22:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:03.654 22:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.654 22:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:03.654 22:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:03.912 22:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:23:03.912 22:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:23:03.912 22:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 4083369 00:23:03.912 22:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 4083369 ']' 00:23:03.912 22:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 4083369 00:23:03.912 22:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:23:03.912 22:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:03.912 22:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4083369 00:23:03.912 22:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:03.912 22:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:03.912 22:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4083369' 00:23:03.912 killing process with pid 4083369 00:23:03.912 22:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 4083369 00:23:03.912 22:06:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 4083369 
00:23:06.441 22:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:06.441 22:06:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:06.441 22:06:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:23:06.441 22:06:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:06.441 22:06:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:23:06.441 22:06:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:06.441 22:06:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:06.441 rmmod nvme_tcp 00:23:06.441 rmmod nvme_fabrics 00:23:06.441 rmmod nvme_keyring 00:23:06.441 22:06:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:06.441 22:06:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:23:06.441 22:06:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:23:06.441 22:06:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 4106007 ']' 00:23:06.441 22:06:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 4106007 00:23:06.441 22:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 4106007 ']' 00:23:06.441 22:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 4106007 00:23:06.441 22:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:23:06.441 22:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:06.441 22:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4106007 00:23:06.441 22:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:06.441 22:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:06.441 22:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4106007' 00:23:06.441 killing process with pid 4106007 00:23:06.441 22:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 4106007 00:23:06.441 22:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 4106007 00:23:07.815 22:06:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:07.816 22:06:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:07.816 22:06:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:07.816 22:06:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:07.816 22:06:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:07.816 22:06:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.816 22:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:07.816 22:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.756 22:06:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:09.756 22:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.SXl /tmp/spdk.key-sha256.NQa /tmp/spdk.key-sha384.eZA /tmp/spdk.key-sha512.Pok /tmp/spdk.key-sha512.u78 /tmp/spdk.key-sha384.ySA /tmp/spdk.key-sha256.6S5 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:09.756 00:23:09.756 real 3m15.322s 00:23:09.756 user 7m30.102s 00:23:09.756 sys 0m24.830s 00:23:09.756 22:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:09.756 22:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.756 ************************************ 00:23:09.756 END TEST nvmf_auth_target 00:23:09.756 ************************************ 00:23:09.756 22:06:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:09.756 22:06:29 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:23:09.756 22:06:29 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:09.756 22:06:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:23:09.756 22:06:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:09.756 22:06:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:09.756 ************************************ 00:23:09.756 START TEST nvmf_bdevio_no_huge 00:23:09.756 ************************************ 00:23:09.756 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:10.015 * Looking for test storage... 00:23:10.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
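At this point common.sh's build_nvmf_app_args is still assembling the target's argument list; the shared-memory id and 0xFFFF trace mask appended above are joined by the NO_HUGE flags on the very next entry. For reference, the fully expanded command this assembly produces for the --no-huge run (it appears verbatim once nvmfappstart fires further down; the core mask 0x78, binary path, and namespace name are all taken from this log, not invented):

# Target launch inside the test namespace, as nvmfappstart executes it below.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78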
00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:23:10.015 22:06:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:11.915 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:11.916 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:11.916 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:11.916 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:11.916 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:11.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:11.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:23:11.916 00:23:11.916 --- 10.0.0.2 ping statistics --- 00:23:11.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.916 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:11.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:11.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:23:11.916 00:23:11.916 --- 10.0.0.1 ping statistics --- 00:23:11.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.916 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=4109176 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 4109176 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 4109176 ']' 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:11.916 22:06:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:12.174 [2024-07-13 22:06:31.318889] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:12.174 [2024-07-13 22:06:31.319043] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:12.174 [2024-07-13 22:06:31.479788] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:12.431 [2024-07-13 22:06:31.763077] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:12.432 [2024-07-13 22:06:31.763158] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.432 [2024-07-13 22:06:31.763201] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.432 [2024-07-13 22:06:31.763223] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.432 [2024-07-13 22:06:31.763258] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:12.432 [2024-07-13 22:06:31.763403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:12.432 [2024-07-13 22:06:31.763528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:23:12.432 [2024-07-13 22:06:31.763603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:12.432 [2024-07-13 22:06:31.763624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:12.997 [2024-07-13 22:06:32.282314] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:12.997 Malloc0 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.997 22:06:32 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:12.997 [2024-07-13 22:06:32.372041] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:12.997 { 00:23:12.997 "params": { 00:23:12.997 "name": "Nvme$subsystem", 00:23:12.997 "trtype": "$TEST_TRANSPORT", 00:23:12.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.997 "adrfam": "ipv4", 00:23:12.997 "trsvcid": "$NVMF_PORT", 00:23:12.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.997 "hdgst": ${hdgst:-false}, 00:23:12.997 "ddgst": ${ddgst:-false} 00:23:12.997 }, 00:23:12.997 "method": "bdev_nvme_attach_controller" 00:23:12.997 } 00:23:12.997 EOF 00:23:12.997 )") 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:23:12.997 22:06:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:12.997 "params": { 00:23:12.997 "name": "Nvme1", 00:23:12.997 "trtype": "tcp", 00:23:12.997 "traddr": "10.0.0.2", 00:23:12.997 "adrfam": "ipv4", 00:23:12.997 "trsvcid": "4420", 00:23:12.997 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.997 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:12.997 "hdgst": false, 00:23:12.997 "ddgst": false 00:23:12.997 }, 00:23:12.997 "method": "bdev_nvme_attach_controller" 00:23:12.997 }' 00:23:13.255 [2024-07-13 22:06:32.452064] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:13.255 [2024-07-13 22:06:32.452206] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid4109339 ] 00:23:13.255 [2024-07-13 22:06:32.596613] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:13.512 [2024-07-13 22:06:32.852602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:13.512 [2024-07-13 22:06:32.852648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.512 [2024-07-13 22:06:32.852654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.076 I/O targets: 00:23:14.076 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:14.076 00:23:14.076 00:23:14.076 CUnit - A unit testing framework for C - Version 2.1-3 00:23:14.076 http://cunit.sourceforge.net/ 00:23:14.076 00:23:14.076 00:23:14.076 Suite: bdevio tests on: Nvme1n1 00:23:14.076 Test: blockdev write read block ...passed 00:23:14.333 Test: blockdev write zeroes read block ...passed 00:23:14.333 Test: blockdev write zeroes read no split ...passed 00:23:14.333 Test: blockdev write zeroes read split ...passed 00:23:14.333 Test: blockdev write zeroes read split partial ...passed 00:23:14.333 Test: blockdev reset ...[2024-07-13 22:06:33.669172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:14.333 [2024-07-13 22:06:33.669371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f1100 (9): Bad file descriptor 00:23:14.333 [2024-07-13 22:06:33.686740] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:14.333 passed 00:23:14.589 Test: blockdev write read 8 blocks ...passed 00:23:14.590 Test: blockdev write read size > 128k ...passed 00:23:14.590 Test: blockdev write read invalid size ...passed 00:23:14.590 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:14.590 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:14.590 Test: blockdev write read max offset ...passed 00:23:14.590 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:14.590 Test: blockdev writev readv 8 blocks ...passed 00:23:14.590 Test: blockdev writev readv 30 x 1block ...passed 00:23:14.590 Test: blockdev writev readv block ...passed 00:23:14.590 Test: blockdev writev readv size > 128k ...passed 00:23:14.590 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:14.590 Test: blockdev comparev and writev ...[2024-07-13 22:06:33.950095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:14.590 [2024-07-13 22:06:33.950168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.590 [2024-07-13 22:06:33.950208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:14.590 [2024-07-13 22:06:33.950235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:14.590 [2024-07-13 22:06:33.950741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:14.590 [2024-07-13 22:06:33.950786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:14.590 [2024-07-13 22:06:33.950828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:14.590 [2024-07-13 22:06:33.950858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:14.590 [2024-07-13 22:06:33.951354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:14.590 [2024-07-13 22:06:33.951388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:14.590 [2024-07-13 22:06:33.951428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:14.590 [2024-07-13 22:06:33.951454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:14.590 [2024-07-13 22:06:33.951941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:14.590 [2024-07-13 22:06:33.951974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:14.590 [2024-07-13 22:06:33.952008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:14.590 [2024-07-13 22:06:33.952042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:14.847 passed 00:23:14.847 Test: blockdev nvme passthru rw ...passed 00:23:14.847 Test: blockdev nvme passthru vendor specific ...[2024-07-13 22:06:34.036393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:14.847 [2024-07-13 22:06:34.036454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:14.847 [2024-07-13 22:06:34.036750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:14.847 [2024-07-13 22:06:34.036783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:14.847 [2024-07-13 22:06:34.037049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:14.847 [2024-07-13 22:06:34.037081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:14.847 [2024-07-13 22:06:34.037324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:14.847 [2024-07-13 22:06:34.037356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:14.847 passed 00:23:14.847 Test: blockdev nvme admin passthru ...passed 00:23:14.847 Test: blockdev copy ...passed 00:23:14.847 00:23:14.847 Run Summary: Type Total Ran Passed Failed Inactive 00:23:14.847 suites 1 1 n/a 0 0 00:23:14.847 tests 23 23 23 0 0 00:23:14.847 asserts 152 152 152 0 n/a 00:23:14.847 00:23:14.847 Elapsed time = 1.359 seconds 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:15.780 rmmod nvme_tcp 00:23:15.780 rmmod nvme_fabrics 00:23:15.780 rmmod nvme_keyring 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 4109176 ']' 00:23:15.780 22:06:34 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 4109176 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 4109176 ']' 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 4109176 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4109176 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4109176' 00:23:15.780 killing process with pid 4109176 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 4109176 00:23:15.780 22:06:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 4109176 00:23:16.714 22:06:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:16.714 22:06:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:16.714 22:06:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:16.714 22:06:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:16.714 22:06:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:16.714 22:06:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.714 22:06:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.714 22:06:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.617 22:06:37 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:18.617 00:23:18.617 real 0m8.721s 00:23:18.617 user 0m20.402s 00:23:18.617 sys 0m2.807s 00:23:18.617 22:06:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:18.617 22:06:37 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:18.617 ************************************ 00:23:18.617 END TEST nvmf_bdevio_no_huge 00:23:18.617 ************************************ 00:23:18.617 22:06:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:18.617 22:06:37 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:18.617 22:06:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:18.617 22:06:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:18.617 22:06:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:18.617 ************************************ 00:23:18.617 START TEST nvmf_tls 00:23:18.617 ************************************ 00:23:18.617 22:06:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:18.617 * Looking for test storage... 
00:23:18.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:18.617 22:06:37 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:18.617 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:18.617 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.617 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.617 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.617 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.617 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.617 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.617 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.617 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.617 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.617 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.617 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:18.617 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:18.617 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.617 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.617 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:23:18.618 22:06:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:23:20.522 
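In the common.sh sourcing above (nvmf/common.sh@17-@19), the host identity comes straight from nvme-cli. Stand-alone, the same steps look like this; the suffix strip used for the host ID is an equivalent reconstruction, not the library's exact text:

    # Host identity as minted above: gen-hostnqn emits
    # nqn.2014-08.org.nvmexpress:uuid:<uuid>; the host ID reuses that UUID.
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}                 # keep only the trailing UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")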
22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:20.522 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:20.522 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:20.522 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:20.522 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:20.522 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.781 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:20.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:20.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:23:20.781 00:23:20.781 --- 10.0.0.2 ping statistics --- 00:23:20.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.781 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:23:20.781 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:20.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:23:20.781 00:23:20.781 --- 10.0.0.1 ping statistics --- 00:23:20.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.781 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:23:20.781 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.781 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:23:20.781 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:20.781 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.781 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:20.781 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:20.781 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.781 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:20.781 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:20.781 22:06:39 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:20.781 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:20.781 22:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:20.781 22:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.781 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4111540 00:23:20.781 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:20.781 22:06:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4111540 00:23:20.781 22:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4111540 ']' 00:23:20.781 22:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.781 22:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:20.781 22:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.781 22:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:20.781 22:06:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.781 [2024-07-13 22:06:40.044539] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:20.781 [2024-07-13 22:06:40.044690] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.781 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.039 [2024-07-13 22:06:40.181070] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.298 [2024-07-13 22:06:40.436355] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.298 [2024-07-13 22:06:40.436446] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
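Taken together, the device-discovery loop (nvmf/common.sh@382-@400) and the nvmf_tcp_init block above do two things: map each E810 PCI function to its kernel interface through sysfs, then pin the target-side port in a private network namespace and prove reachability both ways. A replayable condensation, with device names and addresses taken from this run:

    # PCI function -> netdev, as the discovery loop above does.
    pci=0000:0a:00.0
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$path" ] && echo "Found net devices under $pci: ${path##*/}"
    done

    # Namespace bring-up and two-way connectivity check, as nvmf_tcp_init does.
    TGT=cvl_0_0 INI=cvl_0_1 NS=cvl_0_0_ns_spdk
    ip -4 addr flush $TGT                          # start from clean addresses
    ip -4 addr flush $INI
    ip netns add $NS
    ip link set $TGT netns $NS                     # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev $INI               # initiator side stays in the root ns
    ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT
    ip link set $INI up
    ip netns exec $NS ip link set $TGT up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i $INI -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                             # initiator -> target
    ip netns exec $NS ping -c 1 10.0.0.1           # target -> initiator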
00:23:21.298 [2024-07-13 22:06:40.436475] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.298 [2024-07-13 22:06:40.436500] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.298 [2024-07-13 22:06:40.436521] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:21.298 [2024-07-13 22:06:40.436576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.865 22:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:21.865 22:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:21.865 22:06:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:21.865 22:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:21.865 22:06:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.865 22:06:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.865 22:06:40 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:23:21.865 22:06:40 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:22.123 true 00:23:22.123 22:06:41 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:22.123 22:06:41 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:23:22.382 22:06:41 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:23:22.382 22:06:41 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:23:22.382 22:06:41 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:22.641 22:06:41 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:22.641 22:06:41 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:23:22.641 22:06:42 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:23:22.641 22:06:42 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:23:22.641 22:06:42 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:22.900 22:06:42 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:22.900 22:06:42 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:23:23.159 22:06:42 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:23:23.159 22:06:42 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:23:23.159 22:06:42 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:23.159 22:06:42 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:23:23.417 22:06:42 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:23:23.417 22:06:42 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:23:23.417 22:06:42 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:23.690 22:06:43 nvmf_tcp.nvmf_tls -- 
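Every ssl option probed in this file follows the same set-then-read-back pattern visible above: write with sock_impl_set_options, read with sock_impl_get_options, compare the jq-extracted field. The skeleton of one round trip; the rpc helper simply wraps the full scripts/rpc.py path used in the trace:

    # One set/verify round trip against the ssl sock implementation.
    rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }
    rpc sock_set_default_impl -i ssl
    rpc sock_impl_set_options -i ssl --tls-version 13
    version=$(rpc sock_impl_get_options -i ssl | jq -r .tls_version)
    if [[ $version != 13 ]]; then
        echo "expected tls_version 13, got $version"
    fi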
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:23.690 22:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:23:24.014 22:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:23:24.014 22:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:23:24.014 22:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:24.277 22:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:24.277 22:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:23:24.534 22:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:23:24.534 22:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:23:24.534 22:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:24.534 22:06:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:24.534 22:06:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:24.534 22:06:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:24.534 22:06:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:24.534 22:06:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:24.534 22:06:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:24.534 22:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:24.534 22:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:24.534 22:06:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:24.534 22:06:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:24.534 22:06:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:24.534 22:06:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:23:24.534 22:06:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:23:24.534 22:06:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:24.792 22:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:24.792 22:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:23:24.792 22:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.ztHFgK0Ktg 00:23:24.792 22:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:24.792 22:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.VsM9DKYJU3 00:23:24.792 22:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:24.792 22:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:24.792 22:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.ztHFgK0Ktg 00:23:24.792 22:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.VsM9DKYJU3 00:23:24.792 22:06:43 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
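The two keys minted above are in the NVMe TLS PSK interchange format. Reading the inline python together with its output, the base64 payload appears to be the configured key bytes followed by their little-endian CRC-32, with the 01 field selecting the SHA-256 variant; that derivation is inferred from this log, not quoted from common.sh. A one-liner that reproduces the first key printed above:

    # Rebuilds NVMeTLSkey-1:01:MDAx...JEiQ: from the raw key (derivation inferred).
    key=00112233445566778899aabbccddeeff
    python3 -c "import base64, zlib; k = b'$key'; \
    print('NVMeTLSkey-1:01:' + base64.b64encode(k + zlib.crc32(k).to_bytes(4, 'little')).decode() + ':')"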
sock_impl_set_options -i ssl --tls-version 13 00:23:25.050 22:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:25.616 22:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.ztHFgK0Ktg 00:23:25.616 22:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ztHFgK0Ktg 00:23:25.616 22:06:44 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:25.874 [2024-07-13 22:06:45.074881] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.874 22:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:26.132 22:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:26.390 [2024-07-13 22:06:45.564267] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:26.390 [2024-07-13 22:06:45.564569] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.390 22:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:26.648 malloc0 00:23:26.648 22:06:45 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:26.906 22:06:46 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ztHFgK0Ktg 00:23:27.164 [2024-07-13 22:06:46.351878] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:27.164 22:06:46 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ztHFgK0Ktg 00:23:27.164 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.142 Initializing NVMe Controllers 00:23:37.143 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:37.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:37.143 Initialization complete. Launching workers. 
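The target-side setup just traced is compact once the xtrace noise is removed: pick TLS 1.3 for the ssl implementation while the app still sits in --wait-for-rpc, finish init, then build the subsystem and gate it behind the PSK file. Collected as one sketch (rpc as in the earlier snippet; key path from this run):

    rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }
    KEY=/tmp/tmp.ztHFgK0Ktg                             # 0600 PSK file created above
    rpc sock_impl_set_options -i ssl --tls-version 13   # app was started --wait-for-rpc
    rpc framework_start_init
    rpc nvmf_create_transport -t tcp -o
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
    rpc bdev_malloc_create 32 4096 -b malloc0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"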
00:23:37.143 ========================================================
00:23:37.143 Latency(us)
00:23:37.143 Device Information : IOPS MiB/s Average min max
00:23:37.143 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5707.25 22.29 11217.86 2243.57 13112.78
00:23:37.143 ========================================================
00:23:37.143 Total : 5707.25 22.29 11217.86 2243.57 13112.78
00:23:37.143
00:23:37.400 22:06:56 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ztHFgK0Ktg
00:23:37.400 22:06:56 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:23:37.400 22:06:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:23:37.400 22:06:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:23:37.400 22:06:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ztHFgK0Ktg'
00:23:37.400 22:06:56 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:23:37.400 22:06:56 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4113562
00:23:37.400 22:06:56 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:23:37.400 22:06:56 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:37.400 22:06:56 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4113562 /var/tmp/bdevperf.sock
00:23:37.400 22:06:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4113562 ']'
00:23:37.400 22:06:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:37.400 22:06:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:37.400 22:06:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:37.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:37.400 22:06:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:37.400 22:06:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:37.400 [2024-07-13 22:06:56.666522] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
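The run_bdevperf pass traced next reduces to three host-side steps: start bdevperf idle (-z) on its own RPC socket, attach a TLS-enabled controller with the same PSK file the target holds, then drive the I/O through bdevperf.py. As a sketch, with the binary and script paths shortened from the full ones in the log:

    # Host-side attach and run, mirroring the successful TLSTESTn1 pass below.
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.ztHFgK0Ktg               # same interchange-format key file
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests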
00:23:37.401 [2024-07-13 22:06:56.666674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4113562 ] 00:23:37.401 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.401 [2024-07-13 22:06:56.788581] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.658 [2024-07-13 22:06:57.010741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.592 22:06:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:38.592 22:06:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:38.592 22:06:57 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ztHFgK0Ktg 00:23:38.592 [2024-07-13 22:06:57.852432] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:38.592 [2024-07-13 22:06:57.852626] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:38.592 TLSTESTn1 00:23:38.592 22:06:57 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:38.850 Running I/O for 10 seconds... 00:23:48.819 00:23:48.819 Latency(us) 00:23:48.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.819 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:48.819 Verification LBA range: start 0x0 length 0x2000 00:23:48.819 TLSTESTn1 : 10.04 2274.40 8.88 0.00 0.00 56143.15 10971.21 85827.89 00:23:48.819 =================================================================================================================== 00:23:48.819 Total : 2274.40 8.88 0.00 0.00 56143.15 10971.21 85827.89 00:23:48.819 0 00:23:48.819 22:07:08 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:48.819 22:07:08 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 4113562 00:23:48.819 22:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4113562 ']' 00:23:48.819 22:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4113562 00:23:48.819 22:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:48.819 22:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:48.819 22:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4113562 00:23:48.819 22:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:48.819 22:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:48.819 22:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4113562' 00:23:48.819 killing process with pid 4113562 00:23:48.819 22:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4113562 00:23:48.819 Received shutdown signal, test time was about 10.000000 seconds 00:23:48.819 00:23:48.819 Latency(us) 00:23:48.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:23:48.819 =================================================================================================================== 00:23:48.819 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:48.819 [2024-07-13 22:07:08.188972] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:48.819 22:07:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4113562 00:23:49.754 22:07:09 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VsM9DKYJU3 00:23:49.754 22:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:49.754 22:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VsM9DKYJU3 00:23:49.754 22:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:49.754 22:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:49.754 22:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:49.754 22:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:49.754 22:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VsM9DKYJU3 00:23:49.754 22:07:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:49.754 22:07:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:49.754 22:07:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:49.754 22:07:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.VsM9DKYJU3' 00:23:49.754 22:07:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:49.754 22:07:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4115012 00:23:49.754 22:07:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:49.754 22:07:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:49.754 22:07:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4115012 /var/tmp/bdevperf.sock 00:23:49.754 22:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4115012 ']' 00:23:49.754 22:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:49.754 22:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:49.754 22:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:49.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:49.755 22:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:50.013 22:07:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.013 [2024-07-13 22:07:09.225037] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:50.013 [2024-07-13 22:07:09.225185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4115012 ] 00:23:50.013 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.013 [2024-07-13 22:07:09.352418] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.271 [2024-07-13 22:07:09.584188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.845 22:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:50.845 22:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:50.845 22:07:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VsM9DKYJU3 00:23:51.151 [2024-07-13 22:07:10.447659] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:51.151 [2024-07-13 22:07:10.447892] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:51.151 [2024-07-13 22:07:10.458290] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:51.151 [2024-07-13 22:07:10.459058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:51.151 [2024-07-13 22:07:10.460032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:51.151 [2024-07-13 22:07:10.461023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:51.151 [2024-07-13 22:07:10.461058] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:51.151 [2024-07-13 22:07:10.461085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:51.151 request: 00:23:51.151 { 00:23:51.151 "name": "TLSTEST", 00:23:51.151 "trtype": "tcp", 00:23:51.151 "traddr": "10.0.0.2", 00:23:51.151 "adrfam": "ipv4", 00:23:51.152 "trsvcid": "4420", 00:23:51.152 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.152 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:51.152 "prchk_reftag": false, 00:23:51.152 "prchk_guard": false, 00:23:51.152 "hdgst": false, 00:23:51.152 "ddgst": false, 00:23:51.152 "psk": "/tmp/tmp.VsM9DKYJU3", 00:23:51.152 "method": "bdev_nvme_attach_controller", 00:23:51.152 "req_id": 1 00:23:51.152 } 00:23:51.152 Got JSON-RPC error response 00:23:51.152 response: 00:23:51.152 { 00:23:51.152 "code": -5, 00:23:51.152 "message": "Input/output error" 00:23:51.152 } 00:23:51.152 22:07:10 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4115012 00:23:51.152 22:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4115012 ']' 00:23:51.152 22:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4115012 00:23:51.152 22:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:51.152 22:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:51.152 22:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4115012 00:23:51.152 22:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:51.152 22:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:51.152 22:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4115012' 00:23:51.152 killing process with pid 4115012 00:23:51.152 22:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4115012 00:23:51.152 Received shutdown signal, test time was about 10.000000 seconds 00:23:51.152 00:23:51.152 Latency(us) 00:23:51.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.152 =================================================================================================================== 00:23:51.152 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:51.152 [2024-07-13 22:07:10.507723] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:51.152 22:07:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4115012 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ztHFgK0Ktg 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ztHFgK0Ktg 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- 
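The Input/output error above is the desired outcome: target/tls.sh@146 runs this case under NOT, so a rejected attach with the second key (which the target was never given) counts as a pass. The wrapper's shape, reduced from the es bookkeeping visible in the trace (illustrative, not the library source):

    # Invert a command's status: the caller passes only if the command fails.
    NOT() {
        if "$@"; then
            return 1        # unexpected success
        fi
        return 0            # expected failure
    }
    NOT false && echo "wrapped command failed, which is what this test wants"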
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ztHFgK0Ktg 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ztHFgK0Ktg' 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4115286 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4115286 /var/tmp/bdevperf.sock 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4115286 ']' 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:52.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:52.090 22:07:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:52.349 [2024-07-13 22:07:11.533174] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:52.349 [2024-07-13 22:07:11.533354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4115286 ] 00:23:52.349 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.349 [2024-07-13 22:07:11.659214] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.609 [2024-07-13 22:07:11.885973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.175 22:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:53.175 22:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:53.175 22:07:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.ztHFgK0Ktg 00:23:53.434 [2024-07-13 22:07:12.751213] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:53.434 [2024-07-13 22:07:12.751427] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:53.434 [2024-07-13 22:07:12.763877] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:53.434 [2024-07-13 22:07:12.763926] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:53.434 [2024-07-13 22:07:12.764002] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:53.434 [2024-07-13 22:07:12.765048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:53.434 [2024-07-13 22:07:12.766021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:53.434 [2024-07-13 22:07:12.767013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:53.434 [2024-07-13 22:07:12.767050] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:53.434 [2024-07-13 22:07:12.767076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:53.434 request: 00:23:53.434 { 00:23:53.434 "name": "TLSTEST", 00:23:53.434 "trtype": "tcp", 00:23:53.434 "traddr": "10.0.0.2", 00:23:53.434 "adrfam": "ipv4", 00:23:53.434 "trsvcid": "4420", 00:23:53.434 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.434 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:53.434 "prchk_reftag": false, 00:23:53.434 "prchk_guard": false, 00:23:53.434 "hdgst": false, 00:23:53.434 "ddgst": false, 00:23:53.434 "psk": "/tmp/tmp.ztHFgK0Ktg", 00:23:53.434 "method": "bdev_nvme_attach_controller", 00:23:53.434 "req_id": 1 00:23:53.434 } 00:23:53.434 Got JSON-RPC error response 00:23:53.434 response: 00:23:53.434 { 00:23:53.434 "code": -5, 00:23:53.434 "message": "Input/output error" 00:23:53.434 } 00:23:53.434 22:07:12 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4115286 00:23:53.434 22:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4115286 ']' 00:23:53.434 22:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4115286 00:23:53.434 22:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:53.434 22:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:53.434 22:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4115286 00:23:53.434 22:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:53.434 22:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:53.434 22:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4115286' 00:23:53.434 killing process with pid 4115286 00:23:53.434 22:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4115286 00:23:53.434 Received shutdown signal, test time was about 10.000000 seconds 00:23:53.434 00:23:53.434 Latency(us) 00:23:53.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.434 =================================================================================================================== 00:23:53.434 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:53.434 [2024-07-13 22:07:12.813282] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:53.434 22:07:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4115286 00:23:54.407 22:07:13 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:54.407 22:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:54.407 22:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:54.407 22:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:54.407 22:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:54.407 22:07:13 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ztHFgK0Ktg 00:23:54.407 22:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:54.408 22:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ztHFgK0Ktg 00:23:54.408 22:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:54.408 22:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:54.408 22:07:13 nvmf_tcp.nvmf_tls -- 
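This second rejection happens server-side: the key is host1's, but the connection presents hostnqn host2, and only host1 was registered with nvmf_subsystem_add_host --psk. The identity string the target reports missing is built from three fields; assembling it the way the error line prints it (layout inferred from that message alone):

    # The PSK identity the target looked up and missed in the trace above.
    hostnqn=nqn.2016-06.io.spdk:host2       # what the initiator presented
    subnqn=nqn.2016-06.io.spdk:cnode1       # what it tried to connect to
    printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"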
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:54.408 22:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:54.408 22:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ztHFgK0Ktg 00:23:54.408 22:07:13 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:54.408 22:07:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:54.408 22:07:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:54.408 22:07:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ztHFgK0Ktg' 00:23:54.408 22:07:13 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:54.408 22:07:13 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4115559 00:23:54.408 22:07:13 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:54.408 22:07:13 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:54.408 22:07:13 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4115559 /var/tmp/bdevperf.sock 00:23:54.408 22:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4115559 ']' 00:23:54.408 22:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.408 22:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:54.408 22:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.408 22:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:54.408 22:07:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.667 [2024-07-13 22:07:13.857023] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:54.667 [2024-07-13 22:07:13.857184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4115559 ] 00:23:54.667 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.667 [2024-07-13 22:07:13.979935] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.926 [2024-07-13 22:07:14.206113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.491 22:07:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:55.491 22:07:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:55.491 22:07:14 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ztHFgK0Ktg 00:23:55.749 [2024-07-13 22:07:15.045186] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:55.749 [2024-07-13 22:07:15.045376] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:55.749 [2024-07-13 22:07:15.059777] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:55.749 [2024-07-13 22:07:15.059814] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:55.749 [2024-07-13 22:07:15.059917] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:55.749 [2024-07-13 22:07:15.060608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (107): Transport endpoint is not connected 00:23:55.749 [2024-07-13 22:07:15.061587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:23:55.749 [2024-07-13 22:07:15.062577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:55.749 [2024-07-13 22:07:15.062624] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:55.749 [2024-07-13 22:07:15.062651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:55.749 request: 00:23:55.749 { 00:23:55.749 "name": "TLSTEST", 00:23:55.749 "trtype": "tcp", 00:23:55.749 "traddr": "10.0.0.2", 00:23:55.749 "adrfam": "ipv4", 00:23:55.749 "trsvcid": "4420", 00:23:55.749 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:55.749 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:55.749 "prchk_reftag": false, 00:23:55.749 "prchk_guard": false, 00:23:55.749 "hdgst": false, 00:23:55.749 "ddgst": false, 00:23:55.749 "psk": "/tmp/tmp.ztHFgK0Ktg", 00:23:55.749 "method": "bdev_nvme_attach_controller", 00:23:55.749 "req_id": 1 00:23:55.749 } 00:23:55.749 Got JSON-RPC error response 00:23:55.749 response: 00:23:55.749 { 00:23:55.749 "code": -5, 00:23:55.749 "message": "Input/output error" 00:23:55.749 } 00:23:55.749 22:07:15 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4115559 00:23:55.749 22:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4115559 ']' 00:23:55.749 22:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4115559 00:23:55.749 22:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:55.749 22:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:55.749 22:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4115559 00:23:55.749 22:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:55.749 22:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:55.749 22:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4115559' 00:23:55.749 killing process with pid 4115559 00:23:55.750 22:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4115559 00:23:55.750 Received shutdown signal, test time was about 10.000000 seconds 00:23:55.750 00:23:55.750 Latency(us) 00:23:55.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.750 =================================================================================================================== 00:23:55.750 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:55.750 22:07:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4115559 00:23:55.750 [2024-07-13 22:07:15.105424] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4115834 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4115834 /var/tmp/bdevperf.sock 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4115834 ']' 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:56.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:56.682 22:07:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.942 [2024-07-13 22:07:16.110490] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:56.942 [2024-07-13 22:07:16.110632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4115834 ] 00:23:56.942 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.942 [2024-07-13 22:07:16.232933] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.202 [2024-07-13 22:07:16.452710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.768 22:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:57.768 22:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:57.768 22:07:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:58.027 [2024-07-13 22:07:17.325966] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:58.027 [2024-07-13 22:07:17.327787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2000 (9): Bad file descriptor 00:23:58.027 [2024-07-13 22:07:17.328781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.027 [2024-07-13 22:07:17.328811] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:58.027 [2024-07-13 22:07:17.328853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
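The inverse negative case runs here: target/tls.sh@155 attaches with an empty PSK against a listener brought up for TLS, so the server closes the socket during connect (errno 107 again) and the RPC fails the same way, as the JSON-RPC error below shows. Condensed, the call under test (lifted from the trace above; expected to return -5 Input/output error):

# No --psk on an attach to a TLS-enabled listener: expected to fail.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1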
00:23:58.027 request: 00:23:58.027 { 00:23:58.027 "name": "TLSTEST", 00:23:58.027 "trtype": "tcp", 00:23:58.027 "traddr": "10.0.0.2", 00:23:58.027 "adrfam": "ipv4", 00:23:58.027 "trsvcid": "4420", 00:23:58.027 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.027 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:58.027 "prchk_reftag": false, 00:23:58.027 "prchk_guard": false, 00:23:58.027 "hdgst": false, 00:23:58.027 "ddgst": false, 00:23:58.027 "method": "bdev_nvme_attach_controller", 00:23:58.027 "req_id": 1 00:23:58.027 } 00:23:58.027 Got JSON-RPC error response 00:23:58.027 response: 00:23:58.027 { 00:23:58.027 "code": -5, 00:23:58.027 "message": "Input/output error" 00:23:58.027 } 00:23:58.027 22:07:17 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4115834 00:23:58.027 22:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4115834 ']' 00:23:58.027 22:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4115834 00:23:58.027 22:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:58.027 22:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:58.027 22:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4115834 00:23:58.027 22:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:58.027 22:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:58.027 22:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4115834' 00:23:58.027 killing process with pid 4115834 00:23:58.027 22:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4115834 00:23:58.027 Received shutdown signal, test time was about 10.000000 seconds 00:23:58.027 00:23:58.027 Latency(us) 00:23:58.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.027 =================================================================================================================== 00:23:58.027 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:58.027 22:07:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4115834 00:23:58.963 22:07:18 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:58.963 22:07:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:58.963 22:07:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:58.963 22:07:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:58.963 22:07:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:58.963 22:07:18 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 4111540 00:23:58.963 22:07:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4111540 ']' 00:23:58.963 22:07:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4111540 00:23:58.963 22:07:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:58.963 22:07:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:58.963 22:07:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4111540 00:23:59.221 22:07:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:59.221 22:07:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:59.221 22:07:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4111540' 00:23:59.221 
killing process with pid 4111540 00:23:59.221 22:07:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4111540 00:23:59.221 [2024-07-13 22:07:18.357959] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:59.221 22:07:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4111540 00:24:00.599 22:07:19 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:00.599 22:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:00.599 22:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:24:00.599 22:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:24:00.599 22:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:00.599 22:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:24:00.599 22:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:24:00.599 22:07:19 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:00.599 22:07:19 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:24:00.599 22:07:19 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.mdRZcftFe2 00:24:00.599 22:07:19 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:00.599 22:07:19 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.mdRZcftFe2 00:24:00.599 22:07:19 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:24:00.599 22:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:00.599 22:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:00.599 22:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.599 22:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4116254 00:24:00.599 22:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:00.599 22:07:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4116254 00:24:00.599 22:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4116254 ']' 00:24:00.599 22:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.599 22:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:00.599 22:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.599 22:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:00.599 22:07:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:00.599 [2024-07-13 22:07:19.972875] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
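The NVMeTLSkey-1:02:...: string produced above is the PSK in the TLS interchange format; nvmf/common.sh builds it with the inline "python -" step visible in the trace. A sketch of what that step appears to compute, assuming the base64 payload is the configured key bytes followed by their little-endian CRC32 (consistent with the 72-character payload and the "wWXNJw==" trailer seen here); the 01/02 field selects SHA-256 or SHA-384 as the HMAC:

# Sketch, assuming payload = key || CRC32(key); not the verbatim common.sh code.
format_interchange_psk() {
  local key=$1 digest=$2
  python3 - "$key" "$digest" <<'PY'
import base64, struct, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
payload = key + struct.pack("<I", zlib.crc32(key))
print(f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(payload).decode()}:")
PY
}
format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2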
00:24:00.599 [2024-07-13 22:07:19.973024] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.858 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.858 [2024-07-13 22:07:20.121921] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.119 [2024-07-13 22:07:20.388403] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.119 [2024-07-13 22:07:20.388481] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.119 [2024-07-13 22:07:20.388512] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.119 [2024-07-13 22:07:20.388538] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.119 [2024-07-13 22:07:20.388561] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:01.119 [2024-07-13 22:07:20.388611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.686 22:07:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.687 22:07:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:01.687 22:07:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:01.687 22:07:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:01.687 22:07:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:01.687 22:07:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.687 22:07:20 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.mdRZcftFe2 00:24:01.687 22:07:20 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.mdRZcftFe2 00:24:01.687 22:07:20 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:01.945 [2024-07-13 22:07:21.133548] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.945 22:07:21 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:02.223 22:07:21 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:02.481 [2024-07-13 22:07:21.715168] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:02.481 [2024-07-13 22:07:21.715489] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.481 22:07:21 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:02.739 malloc0 00:24:02.739 22:07:22 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:02.997 22:07:22 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.mdRZcftFe2 00:24:03.255 [2024-07-13 22:07:22.563693] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:03.255 22:07:22 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mdRZcftFe2 00:24:03.255 22:07:22 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:03.255 22:07:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:03.255 22:07:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:03.255 22:07:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.mdRZcftFe2' 00:24:03.255 22:07:22 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:03.255 22:07:22 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4116614 00:24:03.255 22:07:22 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:03.255 22:07:22 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:03.255 22:07:22 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4116614 /var/tmp/bdevperf.sock 00:24:03.255 22:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4116614 ']' 00:24:03.255 22:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:03.255 22:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:03.255 22:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:03.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:03.255 22:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:03.255 22:07:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.513 [2024-07-13 22:07:22.668253] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
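Condensed, the target-side TLS bring-up that target/tls.sh@51-58 just walked through; every command below is lifted from this run, with RPC standing in for the full rpc.py path:

# The bring-up sequence exercised above, condensed for reference.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mdRZcftFe2

With the same key file on the initiator side, the bdevperf attach that follows succeeds and TLSTESTn1 runs real I/O for 10 seconds.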
00:24:03.513 [2024-07-13 22:07:22.668402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4116614 ] 00:24:03.513 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.513 [2024-07-13 22:07:22.791667] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.772 [2024-07-13 22:07:23.013286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:04.341 22:07:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:04.341 22:07:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:04.341 22:07:23 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mdRZcftFe2 00:24:04.598 [2024-07-13 22:07:23.814998] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:04.598 [2024-07-13 22:07:23.815228] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:04.598 TLSTESTn1 00:24:04.598 22:07:23 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:04.856 Running I/O for 10 seconds... 00:24:14.843 00:24:14.843 Latency(us) 00:24:14.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.843 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:14.843 Verification LBA range: start 0x0 length 0x2000 00:24:14.843 TLSTESTn1 : 10.08 1445.38 5.65 0.00 0.00 88226.48 13204.29 117285.17 00:24:14.843 =================================================================================================================== 00:24:14.843 Total : 1445.38 5.65 0.00 0.00 88226.48 13204.29 117285.17 00:24:14.843 0 00:24:14.843 22:07:34 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:14.843 22:07:34 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 4116614 00:24:14.843 22:07:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4116614 ']' 00:24:14.843 22:07:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4116614 00:24:14.843 22:07:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:14.843 22:07:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:14.843 22:07:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4116614 00:24:14.843 22:07:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:14.843 22:07:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:14.843 22:07:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4116614' 00:24:14.843 killing process with pid 4116614 00:24:14.843 22:07:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4116614 00:24:14.843 Received shutdown signal, test time was about 10.000000 seconds 00:24:14.843 00:24:14.843 Latency(us) 00:24:14.843 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:24:14.843 =================================================================================================================== 00:24:14.843 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:14.843 [2024-07-13 22:07:34.182302] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:14.843 22:07:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4116614 00:24:15.782 22:07:35 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.mdRZcftFe2 00:24:15.782 22:07:35 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mdRZcftFe2 00:24:15.782 22:07:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:15.782 22:07:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mdRZcftFe2 00:24:15.782 22:07:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:24:15.782 22:07:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:15.782 22:07:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:24:15.782 22:07:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:15.782 22:07:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mdRZcftFe2 00:24:15.782 22:07:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:15.782 22:07:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:15.782 22:07:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:15.782 22:07:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.mdRZcftFe2' 00:24:15.782 22:07:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:15.782 22:07:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4118068 00:24:15.782 22:07:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:15.782 22:07:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:15.782 22:07:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4118068 /var/tmp/bdevperf.sock 00:24:15.782 22:07:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4118068 ']' 00:24:15.782 22:07:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:15.782 22:07:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:15.782 22:07:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:15.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:15.782 22:07:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:15.782 22:07:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.040 [2024-07-13 22:07:35.216392] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:16.040 [2024-07-13 22:07:35.216530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4118068 ] 00:24:16.040 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.040 [2024-07-13 22:07:35.338520] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.298 [2024-07-13 22:07:35.560278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.863 22:07:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:16.863 22:07:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:16.863 22:07:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mdRZcftFe2 00:24:17.123 [2024-07-13 22:07:36.370437] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:17.123 [2024-07-13 22:07:36.370541] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:17.123 [2024-07-13 22:07:36.370564] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.mdRZcftFe2 00:24:17.123 request: 00:24:17.123 { 00:24:17.123 "name": "TLSTEST", 00:24:17.123 "trtype": "tcp", 00:24:17.123 "traddr": "10.0.0.2", 00:24:17.123 "adrfam": "ipv4", 00:24:17.123 "trsvcid": "4420", 00:24:17.123 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.123 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:17.123 "prchk_reftag": false, 00:24:17.123 "prchk_guard": false, 00:24:17.123 "hdgst": false, 00:24:17.123 "ddgst": false, 00:24:17.123 "psk": "/tmp/tmp.mdRZcftFe2", 00:24:17.123 "method": "bdev_nvme_attach_controller", 00:24:17.123 "req_id": 1 00:24:17.123 } 00:24:17.123 Got JSON-RPC error response 00:24:17.123 response: 00:24:17.123 { 00:24:17.123 "code": -1, 00:24:17.123 "message": "Operation not permitted" 00:24:17.123 } 00:24:17.123 22:07:36 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4118068 00:24:17.123 22:07:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4118068 ']' 00:24:17.123 22:07:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4118068 00:24:17.123 22:07:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:17.123 22:07:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:17.123 22:07:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4118068 00:24:17.123 22:07:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:17.123 22:07:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:17.123 22:07:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4118068' 00:24:17.123 killing process with pid 4118068 00:24:17.123 22:07:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4118068 00:24:17.123 Received shutdown signal, test time was about 10.000000 seconds 00:24:17.123 00:24:17.123 Latency(us) 00:24:17.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.123 
=================================================================================================================== 00:24:17.123 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:17.123 22:07:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4118068 00:24:18.061 22:07:37 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:24:18.061 22:07:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:18.061 22:07:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:18.061 22:07:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:18.061 22:07:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:18.061 22:07:37 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 4116254 00:24:18.061 22:07:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4116254 ']' 00:24:18.061 22:07:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4116254 00:24:18.061 22:07:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:18.061 22:07:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:18.061 22:07:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4116254 00:24:18.061 22:07:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:18.061 22:07:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:18.061 22:07:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4116254' 00:24:18.061 killing process with pid 4116254 00:24:18.061 22:07:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4116254 00:24:18.061 [2024-07-13 22:07:37.393513] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:18.061 22:07:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4116254 00:24:19.438 22:07:38 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:24:19.438 22:07:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:19.438 22:07:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:19.438 22:07:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.438 22:07:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4118526 00:24:19.438 22:07:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:19.438 22:07:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4118526 00:24:19.438 22:07:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4118526 ']' 00:24:19.438 22:07:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.438 22:07:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:19.438 22:07:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
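The chmod 0666 above is the point of this test case: bdev_nvme_load_psk refuses a key file that is group- or other-accessible ("Incorrect permissions for PSK file", surfaced as -1 Operation not permitted), and the freshly restarted target applies the same check in tcp_load_psk when the still-0666 key is handed to nvmf_subsystem_add_host below. A hypothetical pre-flight guard for scripts that manage these key files (GNU stat assumed):

# Hypothetical guard: check the mode before passing the PSK path to an RPC;
# 0600 is the mode the test restores once the negative cases are done.
psk=/tmp/tmp.mdRZcftFe2
mode=$(stat -c '%a' "$psk")
[[ $mode == 600 ]] || { echo "refusing PSK $psk with mode $mode" >&2; exit 1; }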
00:24:19.438 22:07:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:19.438 22:07:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.696 [2024-07-13 22:07:38.860251] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:19.696 [2024-07-13 22:07:38.860399] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.696 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.696 [2024-07-13 22:07:39.004376] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.954 [2024-07-13 22:07:39.259741] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.954 [2024-07-13 22:07:39.259820] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:19.955 [2024-07-13 22:07:39.259863] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.955 [2024-07-13 22:07:39.259903] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:19.955 [2024-07-13 22:07:39.259926] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:19.955 [2024-07-13 22:07:39.259976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.522 22:07:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:20.522 22:07:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:20.522 22:07:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:20.522 22:07:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:20.522 22:07:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.522 22:07:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:20.522 22:07:39 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.mdRZcftFe2 00:24:20.522 22:07:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:24:20.522 22:07:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.mdRZcftFe2 00:24:20.522 22:07:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:24:20.522 22:07:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:20.522 22:07:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:24:20.522 22:07:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:20.522 22:07:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.mdRZcftFe2 00:24:20.522 22:07:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.mdRZcftFe2 00:24:20.522 22:07:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:20.782 [2024-07-13 22:07:40.022261] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.782 22:07:40 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:21.041 
22:07:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:21.299 [2024-07-13 22:07:40.519637] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:21.299 [2024-07-13 22:07:40.520005] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.299 22:07:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:21.557 malloc0 00:24:21.557 22:07:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:21.816 22:07:41 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mdRZcftFe2 00:24:22.075 [2024-07-13 22:07:41.407081] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:24:22.075 [2024-07-13 22:07:41.407148] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:24:22.075 [2024-07-13 22:07:41.407207] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:22.075 request: 00:24:22.075 { 00:24:22.075 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.075 "host": "nqn.2016-06.io.spdk:host1", 00:24:22.075 "psk": "/tmp/tmp.mdRZcftFe2", 00:24:22.075 "method": "nvmf_subsystem_add_host", 00:24:22.075 "req_id": 1 00:24:22.075 } 00:24:22.075 Got JSON-RPC error response 00:24:22.075 response: 00:24:22.075 { 00:24:22.075 "code": -32603, 00:24:22.075 "message": "Internal error" 00:24:22.075 } 00:24:22.075 22:07:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:24:22.075 22:07:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:22.075 22:07:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:22.075 22:07:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:22.075 22:07:41 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 4118526 00:24:22.075 22:07:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4118526 ']' 00:24:22.075 22:07:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4118526 00:24:22.075 22:07:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:22.075 22:07:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:22.075 22:07:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4118526 00:24:22.075 22:07:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:22.075 22:07:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:22.075 22:07:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4118526' 00:24:22.075 killing process with pid 4118526 00:24:22.075 22:07:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4118526 00:24:22.075 22:07:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4118526 00:24:23.544 22:07:42 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.mdRZcftFe2 00:24:23.544 22:07:42 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:24:23.544 
22:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:23.544 22:07:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:23.544 22:07:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.544 22:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4118971 00:24:23.544 22:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:23.544 22:07:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4118971 00:24:23.544 22:07:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4118971 ']' 00:24:23.544 22:07:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.544 22:07:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:23.544 22:07:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.544 22:07:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:23.544 22:07:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.802 [2024-07-13 22:07:42.956489] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:23.802 [2024-07-13 22:07:42.956625] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.802 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.802 [2024-07-13 22:07:43.089972] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.061 [2024-07-13 22:07:43.340681] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.061 [2024-07-13 22:07:43.340758] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.061 [2024-07-13 22:07:43.340788] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.061 [2024-07-13 22:07:43.340814] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.061 [2024-07-13 22:07:43.340836] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:24.061 [2024-07-13 22:07:43.340892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.625 22:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:24.625 22:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:24.625 22:07:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:24.625 22:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:24.625 22:07:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:24.625 22:07:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.625 22:07:43 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.mdRZcftFe2 00:24:24.625 22:07:43 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.mdRZcftFe2 00:24:24.625 22:07:43 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:24.883 [2024-07-13 22:07:44.119329] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.883 22:07:44 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:25.141 22:07:44 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:25.399 [2024-07-13 22:07:44.624807] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:25.399 [2024-07-13 22:07:44.625119] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:25.399 22:07:44 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:25.658 malloc0 00:24:25.658 22:07:44 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:25.916 22:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mdRZcftFe2 00:24:26.175 [2024-07-13 22:07:45.427955] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:26.175 22:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=4119379 00:24:26.175 22:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:26.175 22:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:26.175 22:07:45 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 4119379 /var/tmp/bdevperf.sock 00:24:26.175 22:07:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4119379 ']' 00:24:26.175 22:07:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:26.175 22:07:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:26.175 22:07:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:26.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:26.175 22:07:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:26.175 22:07:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.175 [2024-07-13 22:07:45.525673] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:26.175 [2024-07-13 22:07:45.525813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4119379 ] 00:24:26.434 EAL: No free 2048 kB hugepages reported on node 1 00:24:26.434 [2024-07-13 22:07:45.647521] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.694 [2024-07-13 22:07:45.869449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:27.261 22:07:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:27.261 22:07:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:27.261 22:07:46 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mdRZcftFe2 00:24:27.519 [2024-07-13 22:07:46.667955] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:27.519 [2024-07-13 22:07:46.668174] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:27.519 TLSTESTn1 00:24:27.519 22:07:46 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:27.778 22:07:47 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:24:27.778 "subsystems": [ 00:24:27.778 { 00:24:27.778 "subsystem": "keyring", 00:24:27.778 "config": [] 00:24:27.778 }, 00:24:27.778 { 00:24:27.778 "subsystem": "iobuf", 00:24:27.778 "config": [ 00:24:27.778 { 00:24:27.778 "method": "iobuf_set_options", 00:24:27.778 "params": { 00:24:27.778 "small_pool_count": 8192, 00:24:27.778 "large_pool_count": 1024, 00:24:27.778 "small_bufsize": 8192, 00:24:27.778 "large_bufsize": 135168 00:24:27.778 } 00:24:27.778 } 00:24:27.778 ] 00:24:27.778 }, 00:24:27.778 { 00:24:27.778 "subsystem": "sock", 00:24:27.778 "config": [ 00:24:27.778 { 00:24:27.778 "method": "sock_set_default_impl", 00:24:27.778 "params": { 00:24:27.778 "impl_name": "posix" 00:24:27.778 } 00:24:27.778 }, 00:24:27.778 { 00:24:27.778 "method": "sock_impl_set_options", 00:24:27.778 "params": { 00:24:27.778 "impl_name": "ssl", 00:24:27.778 "recv_buf_size": 4096, 00:24:27.778 "send_buf_size": 4096, 00:24:27.778 "enable_recv_pipe": true, 00:24:27.778 "enable_quickack": false, 00:24:27.778 "enable_placement_id": 0, 00:24:27.778 "enable_zerocopy_send_server": true, 00:24:27.778 "enable_zerocopy_send_client": false, 00:24:27.778 "zerocopy_threshold": 0, 00:24:27.778 "tls_version": 0, 00:24:27.778 "enable_ktls": false 00:24:27.778 } 00:24:27.778 }, 00:24:27.778 { 00:24:27.778 "method": "sock_impl_set_options", 00:24:27.778 "params": { 00:24:27.778 "impl_name": "posix", 00:24:27.778 "recv_buf_size": 2097152, 00:24:27.779 
"send_buf_size": 2097152, 00:24:27.779 "enable_recv_pipe": true, 00:24:27.779 "enable_quickack": false, 00:24:27.779 "enable_placement_id": 0, 00:24:27.779 "enable_zerocopy_send_server": true, 00:24:27.779 "enable_zerocopy_send_client": false, 00:24:27.779 "zerocopy_threshold": 0, 00:24:27.779 "tls_version": 0, 00:24:27.779 "enable_ktls": false 00:24:27.779 } 00:24:27.779 } 00:24:27.779 ] 00:24:27.779 }, 00:24:27.779 { 00:24:27.779 "subsystem": "vmd", 00:24:27.779 "config": [] 00:24:27.779 }, 00:24:27.779 { 00:24:27.779 "subsystem": "accel", 00:24:27.779 "config": [ 00:24:27.779 { 00:24:27.779 "method": "accel_set_options", 00:24:27.779 "params": { 00:24:27.779 "small_cache_size": 128, 00:24:27.779 "large_cache_size": 16, 00:24:27.779 "task_count": 2048, 00:24:27.779 "sequence_count": 2048, 00:24:27.779 "buf_count": 2048 00:24:27.779 } 00:24:27.779 } 00:24:27.779 ] 00:24:27.779 }, 00:24:27.779 { 00:24:27.779 "subsystem": "bdev", 00:24:27.779 "config": [ 00:24:27.779 { 00:24:27.779 "method": "bdev_set_options", 00:24:27.779 "params": { 00:24:27.779 "bdev_io_pool_size": 65535, 00:24:27.779 "bdev_io_cache_size": 256, 00:24:27.779 "bdev_auto_examine": true, 00:24:27.779 "iobuf_small_cache_size": 128, 00:24:27.779 "iobuf_large_cache_size": 16 00:24:27.779 } 00:24:27.779 }, 00:24:27.779 { 00:24:27.779 "method": "bdev_raid_set_options", 00:24:27.779 "params": { 00:24:27.779 "process_window_size_kb": 1024 00:24:27.779 } 00:24:27.779 }, 00:24:27.779 { 00:24:27.779 "method": "bdev_iscsi_set_options", 00:24:27.779 "params": { 00:24:27.779 "timeout_sec": 30 00:24:27.779 } 00:24:27.779 }, 00:24:27.779 { 00:24:27.779 "method": "bdev_nvme_set_options", 00:24:27.779 "params": { 00:24:27.779 "action_on_timeout": "none", 00:24:27.779 "timeout_us": 0, 00:24:27.779 "timeout_admin_us": 0, 00:24:27.779 "keep_alive_timeout_ms": 10000, 00:24:27.779 "arbitration_burst": 0, 00:24:27.779 "low_priority_weight": 0, 00:24:27.779 "medium_priority_weight": 0, 00:24:27.779 "high_priority_weight": 0, 00:24:27.779 "nvme_adminq_poll_period_us": 10000, 00:24:27.779 "nvme_ioq_poll_period_us": 0, 00:24:27.779 "io_queue_requests": 0, 00:24:27.779 "delay_cmd_submit": true, 00:24:27.779 "transport_retry_count": 4, 00:24:27.779 "bdev_retry_count": 3, 00:24:27.779 "transport_ack_timeout": 0, 00:24:27.779 "ctrlr_loss_timeout_sec": 0, 00:24:27.779 "reconnect_delay_sec": 0, 00:24:27.779 "fast_io_fail_timeout_sec": 0, 00:24:27.779 "disable_auto_failback": false, 00:24:27.779 "generate_uuids": false, 00:24:27.779 "transport_tos": 0, 00:24:27.779 "nvme_error_stat": false, 00:24:27.779 "rdma_srq_size": 0, 00:24:27.779 "io_path_stat": false, 00:24:27.779 "allow_accel_sequence": false, 00:24:27.779 "rdma_max_cq_size": 0, 00:24:27.779 "rdma_cm_event_timeout_ms": 0, 00:24:27.779 "dhchap_digests": [ 00:24:27.779 "sha256", 00:24:27.779 "sha384", 00:24:27.779 "sha512" 00:24:27.779 ], 00:24:27.779 "dhchap_dhgroups": [ 00:24:27.779 "null", 00:24:27.779 "ffdhe2048", 00:24:27.779 "ffdhe3072", 00:24:27.779 "ffdhe4096", 00:24:27.779 "ffdhe6144", 00:24:27.779 "ffdhe8192" 00:24:27.779 ] 00:24:27.779 } 00:24:27.779 }, 00:24:27.779 { 00:24:27.779 "method": "bdev_nvme_set_hotplug", 00:24:27.779 "params": { 00:24:27.779 "period_us": 100000, 00:24:27.779 "enable": false 00:24:27.779 } 00:24:27.779 }, 00:24:27.779 { 00:24:27.779 "method": "bdev_malloc_create", 00:24:27.779 "params": { 00:24:27.779 "name": "malloc0", 00:24:27.779 "num_blocks": 8192, 00:24:27.779 "block_size": 4096, 00:24:27.779 "physical_block_size": 4096, 00:24:27.779 "uuid": 
"ec532cfc-f77f-4f9f-ae91-8dad7e1b50e0", 00:24:27.779 "optimal_io_boundary": 0 00:24:27.779 } 00:24:27.779 }, 00:24:27.779 { 00:24:27.779 "method": "bdev_wait_for_examine" 00:24:27.779 } 00:24:27.779 ] 00:24:27.779 }, 00:24:27.779 { 00:24:27.779 "subsystem": "nbd", 00:24:27.779 "config": [] 00:24:27.779 }, 00:24:27.779 { 00:24:27.779 "subsystem": "scheduler", 00:24:27.779 "config": [ 00:24:27.779 { 00:24:27.779 "method": "framework_set_scheduler", 00:24:27.779 "params": { 00:24:27.779 "name": "static" 00:24:27.779 } 00:24:27.779 } 00:24:27.779 ] 00:24:27.779 }, 00:24:27.779 { 00:24:27.779 "subsystem": "nvmf", 00:24:27.779 "config": [ 00:24:27.779 { 00:24:27.779 "method": "nvmf_set_config", 00:24:27.779 "params": { 00:24:27.779 "discovery_filter": "match_any", 00:24:27.779 "admin_cmd_passthru": { 00:24:27.779 "identify_ctrlr": false 00:24:27.779 } 00:24:27.779 } 00:24:27.779 }, 00:24:27.779 { 00:24:27.779 "method": "nvmf_set_max_subsystems", 00:24:27.779 "params": { 00:24:27.779 "max_subsystems": 1024 00:24:27.779 } 00:24:27.779 }, 00:24:27.779 { 00:24:27.779 "method": "nvmf_set_crdt", 00:24:27.779 "params": { 00:24:27.779 "crdt1": 0, 00:24:27.779 "crdt2": 0, 00:24:27.779 "crdt3": 0 00:24:27.779 } 00:24:27.779 }, 00:24:27.779 { 00:24:27.779 "method": "nvmf_create_transport", 00:24:27.779 "params": { 00:24:27.779 "trtype": "TCP", 00:24:27.779 "max_queue_depth": 128, 00:24:27.779 "max_io_qpairs_per_ctrlr": 127, 00:24:27.779 "in_capsule_data_size": 4096, 00:24:27.779 "max_io_size": 131072, 00:24:27.779 "io_unit_size": 131072, 00:24:27.779 "max_aq_depth": 128, 00:24:27.779 "num_shared_buffers": 511, 00:24:27.779 "buf_cache_size": 4294967295, 00:24:27.779 "dif_insert_or_strip": false, 00:24:27.779 "zcopy": false, 00:24:27.779 "c2h_success": false, 00:24:27.779 "sock_priority": 0, 00:24:27.779 "abort_timeout_sec": 1, 00:24:27.779 "ack_timeout": 0, 00:24:27.779 "data_wr_pool_size": 0 00:24:27.779 } 00:24:27.779 }, 00:24:27.779 { 00:24:27.779 "method": "nvmf_create_subsystem", 00:24:27.779 "params": { 00:24:27.779 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.779 "allow_any_host": false, 00:24:27.779 "serial_number": "SPDK00000000000001", 00:24:27.779 "model_number": "SPDK bdev Controller", 00:24:27.779 "max_namespaces": 10, 00:24:27.779 "min_cntlid": 1, 00:24:27.779 "max_cntlid": 65519, 00:24:27.779 "ana_reporting": false 00:24:27.779 } 00:24:27.779 }, 00:24:27.779 { 00:24:27.779 "method": "nvmf_subsystem_add_host", 00:24:27.779 "params": { 00:24:27.779 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.779 "host": "nqn.2016-06.io.spdk:host1", 00:24:27.779 "psk": "/tmp/tmp.mdRZcftFe2" 00:24:27.779 } 00:24:27.779 }, 00:24:27.779 { 00:24:27.779 "method": "nvmf_subsystem_add_ns", 00:24:27.779 "params": { 00:24:27.779 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.779 "namespace": { 00:24:27.779 "nsid": 1, 00:24:27.779 "bdev_name": "malloc0", 00:24:27.779 "nguid": "EC532CFCF77F4F9FAE918DAD7E1B50E0", 00:24:27.779 "uuid": "ec532cfc-f77f-4f9f-ae91-8dad7e1b50e0", 00:24:27.779 "no_auto_visible": false 00:24:27.779 } 00:24:27.779 } 00:24:27.779 }, 00:24:27.779 { 00:24:27.779 "method": "nvmf_subsystem_add_listener", 00:24:27.779 "params": { 00:24:27.779 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.779 "listen_address": { 00:24:27.779 "trtype": "TCP", 00:24:27.779 "adrfam": "IPv4", 00:24:27.779 "traddr": "10.0.0.2", 00:24:27.779 "trsvcid": "4420" 00:24:27.779 }, 00:24:27.779 "secure_channel": true 00:24:27.779 } 00:24:27.779 } 00:24:27.779 ] 00:24:27.779 } 00:24:27.779 ] 00:24:27.779 }' 00:24:27.779 22:07:47 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:28.039 22:07:47 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:24:28.039 "subsystems": [ 00:24:28.039 { 00:24:28.039 "subsystem": "keyring", 00:24:28.039 "config": [] 00:24:28.039 }, 00:24:28.039 { 00:24:28.039 "subsystem": "iobuf", 00:24:28.039 "config": [ 00:24:28.039 { 00:24:28.039 "method": "iobuf_set_options", 00:24:28.039 "params": { 00:24:28.039 "small_pool_count": 8192, 00:24:28.039 "large_pool_count": 1024, 00:24:28.039 "small_bufsize": 8192, 00:24:28.039 "large_bufsize": 135168 00:24:28.039 } 00:24:28.039 } 00:24:28.039 ] 00:24:28.039 }, 00:24:28.039 { 00:24:28.039 "subsystem": "sock", 00:24:28.039 "config": [ 00:24:28.039 { 00:24:28.039 "method": "sock_set_default_impl", 00:24:28.039 "params": { 00:24:28.039 "impl_name": "posix" 00:24:28.039 } 00:24:28.039 }, 00:24:28.039 { 00:24:28.039 "method": "sock_impl_set_options", 00:24:28.039 "params": { 00:24:28.039 "impl_name": "ssl", 00:24:28.039 "recv_buf_size": 4096, 00:24:28.039 "send_buf_size": 4096, 00:24:28.039 "enable_recv_pipe": true, 00:24:28.039 "enable_quickack": false, 00:24:28.039 "enable_placement_id": 0, 00:24:28.039 "enable_zerocopy_send_server": true, 00:24:28.039 "enable_zerocopy_send_client": false, 00:24:28.039 "zerocopy_threshold": 0, 00:24:28.039 "tls_version": 0, 00:24:28.039 "enable_ktls": false 00:24:28.039 } 00:24:28.039 }, 00:24:28.039 { 00:24:28.039 "method": "sock_impl_set_options", 00:24:28.039 "params": { 00:24:28.039 "impl_name": "posix", 00:24:28.039 "recv_buf_size": 2097152, 00:24:28.039 "send_buf_size": 2097152, 00:24:28.039 "enable_recv_pipe": true, 00:24:28.039 "enable_quickack": false, 00:24:28.039 "enable_placement_id": 0, 00:24:28.039 "enable_zerocopy_send_server": true, 00:24:28.039 "enable_zerocopy_send_client": false, 00:24:28.039 "zerocopy_threshold": 0, 00:24:28.039 "tls_version": 0, 00:24:28.039 "enable_ktls": false 00:24:28.039 } 00:24:28.039 } 00:24:28.039 ] 00:24:28.039 }, 00:24:28.039 { 00:24:28.039 "subsystem": "vmd", 00:24:28.039 "config": [] 00:24:28.039 }, 00:24:28.039 { 00:24:28.039 "subsystem": "accel", 00:24:28.039 "config": [ 00:24:28.039 { 00:24:28.039 "method": "accel_set_options", 00:24:28.039 "params": { 00:24:28.039 "small_cache_size": 128, 00:24:28.039 "large_cache_size": 16, 00:24:28.039 "task_count": 2048, 00:24:28.039 "sequence_count": 2048, 00:24:28.039 "buf_count": 2048 00:24:28.039 } 00:24:28.039 } 00:24:28.039 ] 00:24:28.039 }, 00:24:28.039 { 00:24:28.039 "subsystem": "bdev", 00:24:28.040 "config": [ 00:24:28.040 { 00:24:28.040 "method": "bdev_set_options", 00:24:28.040 "params": { 00:24:28.040 "bdev_io_pool_size": 65535, 00:24:28.040 "bdev_io_cache_size": 256, 00:24:28.040 "bdev_auto_examine": true, 00:24:28.040 "iobuf_small_cache_size": 128, 00:24:28.040 "iobuf_large_cache_size": 16 00:24:28.040 } 00:24:28.040 }, 00:24:28.040 { 00:24:28.040 "method": "bdev_raid_set_options", 00:24:28.040 "params": { 00:24:28.040 "process_window_size_kb": 1024 00:24:28.040 } 00:24:28.040 }, 00:24:28.040 { 00:24:28.040 "method": "bdev_iscsi_set_options", 00:24:28.040 "params": { 00:24:28.040 "timeout_sec": 30 00:24:28.040 } 00:24:28.040 }, 00:24:28.040 { 00:24:28.040 "method": "bdev_nvme_set_options", 00:24:28.040 "params": { 00:24:28.040 "action_on_timeout": "none", 00:24:28.040 "timeout_us": 0, 00:24:28.040 "timeout_admin_us": 0, 00:24:28.040 "keep_alive_timeout_ms": 10000, 00:24:28.040 "arbitration_burst": 0, 
00:24:28.040 "low_priority_weight": 0, 00:24:28.040 "medium_priority_weight": 0, 00:24:28.040 "high_priority_weight": 0, 00:24:28.040 "nvme_adminq_poll_period_us": 10000, 00:24:28.040 "nvme_ioq_poll_period_us": 0, 00:24:28.040 "io_queue_requests": 512, 00:24:28.040 "delay_cmd_submit": true, 00:24:28.040 "transport_retry_count": 4, 00:24:28.040 "bdev_retry_count": 3, 00:24:28.040 "transport_ack_timeout": 0, 00:24:28.040 "ctrlr_loss_timeout_sec": 0, 00:24:28.040 "reconnect_delay_sec": 0, 00:24:28.040 "fast_io_fail_timeout_sec": 0, 00:24:28.040 "disable_auto_failback": false, 00:24:28.040 "generate_uuids": false, 00:24:28.040 "transport_tos": 0, 00:24:28.040 "nvme_error_stat": false, 00:24:28.040 "rdma_srq_size": 0, 00:24:28.040 "io_path_stat": false, 00:24:28.040 "allow_accel_sequence": false, 00:24:28.040 "rdma_max_cq_size": 0, 00:24:28.040 "rdma_cm_event_timeout_ms": 0, 00:24:28.040 "dhchap_digests": [ 00:24:28.040 "sha256", 00:24:28.040 "sha384", 00:24:28.040 "sha512" 00:24:28.040 ], 00:24:28.040 "dhchap_dhgroups": [ 00:24:28.040 "null", 00:24:28.040 "ffdhe2048", 00:24:28.040 "ffdhe3072", 00:24:28.040 "ffdhe4096", 00:24:28.040 "ffdhe6144", 00:24:28.040 "ffdhe8192" 00:24:28.040 ] 00:24:28.040 } 00:24:28.040 }, 00:24:28.040 { 00:24:28.040 "method": "bdev_nvme_attach_controller", 00:24:28.040 "params": { 00:24:28.040 "name": "TLSTEST", 00:24:28.040 "trtype": "TCP", 00:24:28.040 "adrfam": "IPv4", 00:24:28.040 "traddr": "10.0.0.2", 00:24:28.040 "trsvcid": "4420", 00:24:28.040 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.040 "prchk_reftag": false, 00:24:28.040 "prchk_guard": false, 00:24:28.040 "ctrlr_loss_timeout_sec": 0, 00:24:28.040 "reconnect_delay_sec": 0, 00:24:28.040 "fast_io_fail_timeout_sec": 0, 00:24:28.040 "psk": "/tmp/tmp.mdRZcftFe2", 00:24:28.040 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:28.040 "hdgst": false, 00:24:28.040 "ddgst": false 00:24:28.040 } 00:24:28.040 }, 00:24:28.040 { 00:24:28.040 "method": "bdev_nvme_set_hotplug", 00:24:28.040 "params": { 00:24:28.040 "period_us": 100000, 00:24:28.040 "enable": false 00:24:28.040 } 00:24:28.040 }, 00:24:28.040 { 00:24:28.040 "method": "bdev_wait_for_examine" 00:24:28.040 } 00:24:28.040 ] 00:24:28.040 }, 00:24:28.040 { 00:24:28.040 "subsystem": "nbd", 00:24:28.040 "config": [] 00:24:28.040 } 00:24:28.040 ] 00:24:28.040 }' 00:24:28.040 22:07:47 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 4119379 00:24:28.040 22:07:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4119379 ']' 00:24:28.040 22:07:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4119379 00:24:28.040 22:07:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:28.040 22:07:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:28.040 22:07:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4119379 00:24:28.040 22:07:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:28.040 22:07:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:28.040 22:07:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4119379' 00:24:28.040 killing process with pid 4119379 00:24:28.040 22:07:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4119379 00:24:28.040 Received shutdown signal, test time was about 10.000000 seconds 00:24:28.040 00:24:28.040 Latency(us) 00:24:28.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:24:28.040 =================================================================================================================== 00:24:28.040 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:28.040 [2024-07-13 22:07:47.428954] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' 22:07:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4119379 00:24:28.040 scheduled for removal in v24.09 hit 1 times 00:24:28.978 22:07:48 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 4118971 00:24:28.978 22:07:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4118971 ']' 00:24:28.978 22:07:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4118971 00:24:29.237 22:07:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:29.237 22:07:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:29.237 22:07:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4118971 00:24:29.237 22:07:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:29.237 22:07:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:29.237 22:07:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4118971' 00:24:29.237 killing process with pid 4118971 00:24:29.237 22:07:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4118971 00:24:29.237 [2024-07-13 22:07:48.403673] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:29.237 22:07:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4118971 00:24:30.612 22:07:49 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:30.612 22:07:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:30.612 22:07:49 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:24:30.612 "subsystems": [ 00:24:30.612 { 00:24:30.612 "subsystem": "keyring", 00:24:30.612 "config": [] 00:24:30.612 }, 00:24:30.612 { 00:24:30.612 "subsystem": "iobuf", 00:24:30.612 "config": [ 00:24:30.612 { 00:24:30.612 "method": "iobuf_set_options", 00:24:30.612 "params": { 00:24:30.612 "small_pool_count": 8192, 00:24:30.612 "large_pool_count": 1024, 00:24:30.612 "small_bufsize": 8192, 00:24:30.612 "large_bufsize": 135168 00:24:30.612 } 00:24:30.612 } 00:24:30.612 ] 00:24:30.612 }, 00:24:30.612 { 00:24:30.612 "subsystem": "sock", 00:24:30.612 "config": [ 00:24:30.612 { 00:24:30.612 "method": "sock_set_default_impl", 00:24:30.612 "params": { 00:24:30.612 "impl_name": "posix" 00:24:30.612 } 00:24:30.612 }, 00:24:30.612 { 00:24:30.612 "method": "sock_impl_set_options", 00:24:30.612 "params": { 00:24:30.612 "impl_name": "ssl", 00:24:30.612 "recv_buf_size": 4096, 00:24:30.612 "send_buf_size": 4096, 00:24:30.612 "enable_recv_pipe": true, 00:24:30.612 "enable_quickack": false, 00:24:30.612 "enable_placement_id": 0, 00:24:30.612 "enable_zerocopy_send_server": true, 00:24:30.612 "enable_zerocopy_send_client": false, 00:24:30.612 "zerocopy_threshold": 0, 00:24:30.612 "tls_version": 0, 00:24:30.612 "enable_ktls": false 00:24:30.612 } 00:24:30.612 }, 00:24:30.612 { 00:24:30.612 "method": "sock_impl_set_options", 00:24:30.612 "params": { 00:24:30.612 "impl_name": "posix", 00:24:30.612 "recv_buf_size": 2097152, 00:24:30.612 "send_buf_size": 2097152, 00:24:30.612 "enable_recv_pipe": true, 
00:24:30.612 "enable_quickack": false, 00:24:30.612 "enable_placement_id": 0, 00:24:30.612 "enable_zerocopy_send_server": true, 00:24:30.612 "enable_zerocopy_send_client": false, 00:24:30.612 "zerocopy_threshold": 0, 00:24:30.612 "tls_version": 0, 00:24:30.612 "enable_ktls": false 00:24:30.612 } 00:24:30.612 } 00:24:30.612 ] 00:24:30.612 }, 00:24:30.612 { 00:24:30.612 "subsystem": "vmd", 00:24:30.612 "config": [] 00:24:30.612 }, 00:24:30.612 { 00:24:30.612 "subsystem": "accel", 00:24:30.612 "config": [ 00:24:30.612 { 00:24:30.612 "method": "accel_set_options", 00:24:30.612 "params": { 00:24:30.612 "small_cache_size": 128, 00:24:30.612 "large_cache_size": 16, 00:24:30.612 "task_count": 2048, 00:24:30.612 "sequence_count": 2048, 00:24:30.612 "buf_count": 2048 00:24:30.612 } 00:24:30.612 } 00:24:30.612 ] 00:24:30.612 }, 00:24:30.612 { 00:24:30.612 "subsystem": "bdev", 00:24:30.612 "config": [ 00:24:30.612 { 00:24:30.612 "method": "bdev_set_options", 00:24:30.612 "params": { 00:24:30.612 "bdev_io_pool_size": 65535, 00:24:30.612 "bdev_io_cache_size": 256, 00:24:30.612 "bdev_auto_examine": true, 00:24:30.612 "iobuf_small_cache_size": 128, 00:24:30.612 "iobuf_large_cache_size": 16 00:24:30.612 } 00:24:30.612 }, 00:24:30.612 { 00:24:30.612 "method": "bdev_raid_set_options", 00:24:30.612 "params": { 00:24:30.612 "process_window_size_kb": 1024 00:24:30.612 } 00:24:30.612 }, 00:24:30.612 { 00:24:30.612 "method": "bdev_iscsi_set_options", 00:24:30.612 "params": { 00:24:30.612 "timeout_sec": 30 00:24:30.612 } 00:24:30.612 }, 00:24:30.612 { 00:24:30.612 "method": "bdev_nvme_set_options", 00:24:30.612 "params": { 00:24:30.612 "action_on_timeout": "none", 00:24:30.612 "timeout_us": 0, 00:24:30.612 "timeout_admin_us": 0, 00:24:30.612 "keep_alive_timeout_ms": 10000, 00:24:30.612 "arbitration_burst": 0, 00:24:30.612 "low_priority_weight": 0, 00:24:30.612 "medium_priority_weight": 0, 00:24:30.612 "high_priority_weight": 0, 00:24:30.612 "nvme_adminq_poll_period_us": 10000, 00:24:30.612 "nvme_ioq_poll_period_us": 0, 00:24:30.612 "io_queue_requests": 0, 00:24:30.612 "delay_cmd_submit": true, 00:24:30.612 "transport_retry_count": 4, 00:24:30.612 "bdev_retry_count": 3, 00:24:30.612 "transport_ack_timeout": 0, 00:24:30.612 "ctrlr_loss_timeout_sec": 0, 00:24:30.612 "reconnect_delay_sec": 0, 00:24:30.612 "fast_io_fail_timeout_sec": 0, 00:24:30.612 "disable_auto_failback": false, 00:24:30.612 "generate_uuids": false, 00:24:30.612 "transport_tos": 0, 00:24:30.612 "nvme_error_stat": false, 00:24:30.612 "rdma_srq_size": 0, 00:24:30.612 "io_path_stat": false, 00:24:30.612 "allow_accel_sequence": false, 00:24:30.612 "rdma_max_cq_size": 0, 00:24:30.612 "rdma_cm_event_timeout_ms": 0, 00:24:30.612 "dhchap_digests": [ 00:24:30.612 "sha256", 00:24:30.612 "sha384", 00:24:30.612 "sha512" 00:24:30.612 ], 00:24:30.612 "dhchap_dhgroups": [ 00:24:30.612 "null", 00:24:30.612 "ffdhe2048", 00:24:30.612 "ffdhe3072", 00:24:30.612 "ffdhe4096", 00:24:30.612 "ffdhe6144", 00:24:30.612 "ffdhe8192" 00:24:30.612 ] 00:24:30.612 } 00:24:30.612 }, 00:24:30.612 { 00:24:30.612 "method": "bdev_nvme_set_hotplug", 00:24:30.612 "params": { 00:24:30.612 "period_us": 100000, 00:24:30.612 "enable": false 00:24:30.612 } 00:24:30.612 }, 00:24:30.612 { 00:24:30.612 "method": "bdev_malloc_create", 00:24:30.612 "params": { 00:24:30.612 "name": "malloc0", 00:24:30.612 "num_blocks": 8192, 00:24:30.612 "block_size": 4096, 00:24:30.612 "physical_block_size": 4096, 00:24:30.612 "uuid": "ec532cfc-f77f-4f9f-ae91-8dad7e1b50e0", 00:24:30.613 "optimal_io_boundary": 0 
00:24:30.613 } 00:24:30.613 }, 00:24:30.613 { 00:24:30.613 "method": "bdev_wait_for_examine" 00:24:30.613 } 00:24:30.613 ] 00:24:30.613 }, 00:24:30.613 { 00:24:30.613 "subsystem": "nbd", 00:24:30.613 "config": [] 00:24:30.613 }, 00:24:30.613 { 00:24:30.613 "subsystem": "scheduler", 00:24:30.613 "config": [ 00:24:30.613 { 00:24:30.613 "method": "framework_set_scheduler", 00:24:30.613 "params": { 00:24:30.613 "name": "static" 00:24:30.613 } 00:24:30.613 } 00:24:30.613 ] 00:24:30.613 }, 00:24:30.613 { 00:24:30.613 "subsystem": "nvmf", 00:24:30.613 "config": [ 00:24:30.613 { 00:24:30.613 "method": "nvmf_set_config", 00:24:30.613 "params": { 00:24:30.613 "discovery_filter": "match_any", 00:24:30.613 "admin_cmd_passthru": { 00:24:30.613 "identify_ctrlr": false 00:24:30.613 } 00:24:30.613 } 00:24:30.613 }, 00:24:30.613 { 00:24:30.613 "method": "nvmf_set_max_subsystems", 00:24:30.613 "params": { 00:24:30.613 "max_subsystems": 1024 00:24:30.613 } 00:24:30.613 }, 00:24:30.613 { 00:24:30.613 "method": "nvmf_set_crdt", 00:24:30.613 "params": { 00:24:30.613 "crdt1": 0, 00:24:30.613 "crdt2": 0, 00:24:30.613 "crdt3": 0 00:24:30.613 } 00:24:30.613 }, 00:24:30.613 { 00:24:30.613 "method": "nvmf_create_transport", 00:24:30.613 "params": { 00:24:30.613 "trtype": "TCP", 00:24:30.613 "max_queue_depth": 128, 00:24:30.613 "max_io_qpairs_per_ctrlr": 127, 00:24:30.613 "in_capsule_data_size": 4096, 00:24:30.613 "max_io_size": 131072, 00:24:30.613 "io_unit_size": 131072, 00:24:30.613 "max_aq_depth": 128, 00:24:30.613 "num_shared_buffers": 511, 00:24:30.613 "buf_cache_size": 4294967295, 00:24:30.613 "dif_insert_or_strip": false, 00:24:30.613 "zcopy": false, 00:24:30.613 "c2h_success": false, 00:24:30.613 "sock_priority": 0, 00:24:30.613 "abort_timeout_sec": 1, 00:24:30.613 "ack_timeout": 0, 00:24:30.613 "data_wr_pool_size": 0 00:24:30.613 } 00:24:30.613 }, 00:24:30.613 { 00:24:30.613 "method": "nvmf_create_subsystem", 00:24:30.613 "params": { 00:24:30.613 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:30.613 "allow_any_host": false, 00:24:30.613 "serial_number": "SPDK00000000000001", 00:24:30.613 "model_number": "SPDK bdev Controller", 00:24:30.613 "max_namespaces": 10, 00:24:30.613 "min_cntlid": 1, 00:24:30.613 "max_cntlid": 65519, 00:24:30.613 "ana_reporting": false 00:24:30.613 } 00:24:30.613 }, 00:24:30.613 { 00:24:30.613 "method": "nvmf_subsystem_add_host", 00:24:30.613 "params": { 00:24:30.613 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:30.613 "host": "nqn.2016-06.io.spdk:host1", 00:24:30.613 "psk": "/tmp/tmp.mdRZcftFe2" 00:24:30.613 } 00:24:30.613 }, 00:24:30.613 { 00:24:30.613 "method": "nvmf_subsystem_add_ns", 00:24:30.613 "params": { 00:24:30.613 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:30.613 "namespace": { 00:24:30.613 "nsid": 1, 00:24:30.613 "bdev_name": "malloc0", 00:24:30.613 "nguid": "EC532CFCF77F4F9FAE918DAD7E1B50E0", 00:24:30.613 "uuid": "ec532cfc-f77f-4f9f-ae91-8dad7e1b50e0", 00:24:30.613 "no_auto_visible": false 00:24:30.613 } 00:24:30.613 } 00:24:30.613 }, 00:24:30.613 { 00:24:30.613 "method": "nvmf_subsystem_add_listener", 00:24:30.613 "params": { 00:24:30.613 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:30.613 "listen_address": { 00:24:30.613 "trtype": "TCP", 00:24:30.613 "adrfam": "IPv4", 00:24:30.613 "traddr": "10.0.0.2", 00:24:30.613 "trsvcid": "4420" 00:24:30.613 }, 00:24:30.613 "secure_channel": true 00:24:30.613 } 00:24:30.613 } 00:24:30.613 ] 00:24:30.613 } 00:24:30.613 ] 00:24:30.613 }' 00:24:30.613 22:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:30.613 
22:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.613 22:07:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4119807 00:24:30.613 22:07:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:30.613 22:07:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4119807 00:24:30.613 22:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4119807 ']' 00:24:30.613 22:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.613 22:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:30.613 22:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.613 22:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:30.613 22:07:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:30.613 [2024-07-13 22:07:49.945321] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:30.613 [2024-07-13 22:07:49.945468] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.872 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.872 [2024-07-13 22:07:50.095729] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.132 [2024-07-13 22:07:50.356504] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.132 [2024-07-13 22:07:50.356586] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.132 [2024-07-13 22:07:50.356617] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.132 [2024-07-13 22:07:50.356642] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.132 [2024-07-13 22:07:50.356665] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
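The waitforlisten step above simply polls until the freshly started target answers on its RPC socket. A rough shell equivalent (a sketch, not the exact autotest helper; rpc_get_methods is used only as a cheap liveness probe):

    for _ in $(seq 1 100); do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done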
00:24:31.132 [2024-07-13 22:07:50.356824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.698 [2024-07-13 22:07:50.906984] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.698 [2024-07-13 22:07:50.922951] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:31.698 [2024-07-13 22:07:50.938987] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:31.698 [2024-07-13 22:07:50.939276] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.698 22:07:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:31.698 22:07:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:31.698 22:07:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:31.698 22:07:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:31.698 22:07:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.698 22:07:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.698 22:07:50 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=4119961 00:24:31.698 22:07:50 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 4119961 /var/tmp/bdevperf.sock 00:24:31.698 22:07:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4119961 ']' 00:24:31.698 22:07:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:31.698 22:07:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:31.698 22:07:50 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:31.698 22:07:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:31.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
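bdevperf is launched here with -z, which keeps it idle until a perform_tests RPC arrives on the private socket named by -r. A sketch of that handshake using the exact flags from the trace; BPERF_CFG stands in for the JSON config echoed below:

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$BPERF_CFG") &
    # once the TLS controller is attached, kick off the run:
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests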
00:24:31.698 22:07:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:31.698 22:07:50 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:24:31.698 "subsystems": [ 00:24:31.698 { 00:24:31.698 "subsystem": "keyring", 00:24:31.698 "config": [] 00:24:31.698 }, 00:24:31.698 { 00:24:31.698 "subsystem": "iobuf", 00:24:31.698 "config": [ 00:24:31.698 { 00:24:31.698 "method": "iobuf_set_options", 00:24:31.698 "params": { 00:24:31.698 "small_pool_count": 8192, 00:24:31.698 "large_pool_count": 1024, 00:24:31.698 "small_bufsize": 8192, 00:24:31.698 "large_bufsize": 135168 00:24:31.699 } 00:24:31.699 } 00:24:31.699 ] 00:24:31.699 }, 00:24:31.699 { 00:24:31.699 "subsystem": "sock", 00:24:31.699 "config": [ 00:24:31.699 { 00:24:31.699 "method": "sock_set_default_impl", 00:24:31.699 "params": { 00:24:31.699 "impl_name": "posix" 00:24:31.699 } 00:24:31.699 }, 00:24:31.699 { 00:24:31.699 "method": "sock_impl_set_options", 00:24:31.699 "params": { 00:24:31.699 "impl_name": "ssl", 00:24:31.699 "recv_buf_size": 4096, 00:24:31.699 "send_buf_size": 4096, 00:24:31.699 "enable_recv_pipe": true, 00:24:31.699 "enable_quickack": false, 00:24:31.699 "enable_placement_id": 0, 00:24:31.699 "enable_zerocopy_send_server": true, 00:24:31.699 "enable_zerocopy_send_client": false, 00:24:31.699 "zerocopy_threshold": 0, 00:24:31.699 "tls_version": 0, 00:24:31.699 "enable_ktls": false 00:24:31.699 } 00:24:31.699 }, 00:24:31.699 { 00:24:31.699 "method": "sock_impl_set_options", 00:24:31.699 "params": { 00:24:31.699 "impl_name": "posix", 00:24:31.699 "recv_buf_size": 2097152, 00:24:31.699 "send_buf_size": 2097152, 00:24:31.699 "enable_recv_pipe": true, 00:24:31.699 "enable_quickack": false, 00:24:31.699 "enable_placement_id": 0, 00:24:31.699 "enable_zerocopy_send_server": true, 00:24:31.699 "enable_zerocopy_send_client": false, 00:24:31.699 "zerocopy_threshold": 0, 00:24:31.699 "tls_version": 0, 00:24:31.699 "enable_ktls": false 00:24:31.699 } 00:24:31.699 } 00:24:31.699 ] 00:24:31.699 }, 00:24:31.699 { 00:24:31.699 "subsystem": "vmd", 00:24:31.699 "config": [] 00:24:31.699 }, 00:24:31.699 { 00:24:31.699 "subsystem": "accel", 00:24:31.699 "config": [ 00:24:31.699 { 00:24:31.699 "method": "accel_set_options", 00:24:31.699 "params": { 00:24:31.699 "small_cache_size": 128, 00:24:31.699 "large_cache_size": 16, 00:24:31.699 "task_count": 2048, 00:24:31.699 "sequence_count": 2048, 00:24:31.699 "buf_count": 2048 00:24:31.699 } 00:24:31.699 } 00:24:31.699 ] 00:24:31.699 }, 00:24:31.699 { 00:24:31.699 "subsystem": "bdev", 00:24:31.699 "config": [ 00:24:31.699 { 00:24:31.699 "method": "bdev_set_options", 00:24:31.699 "params": { 00:24:31.699 "bdev_io_pool_size": 65535, 00:24:31.699 "bdev_io_cache_size": 256, 00:24:31.699 "bdev_auto_examine": true, 00:24:31.699 "iobuf_small_cache_size": 128, 00:24:31.699 "iobuf_large_cache_size": 16 00:24:31.699 } 00:24:31.699 }, 00:24:31.699 { 00:24:31.699 "method": "bdev_raid_set_options", 00:24:31.699 "params": { 00:24:31.699 "process_window_size_kb": 1024 00:24:31.699 } 00:24:31.699 }, 00:24:31.699 { 00:24:31.699 "method": "bdev_iscsi_set_options", 00:24:31.699 "params": { 00:24:31.699 "timeout_sec": 30 00:24:31.699 } 00:24:31.699 }, 00:24:31.699 { 00:24:31.699 "method": "bdev_nvme_set_options", 00:24:31.699 "params": { 00:24:31.699 "action_on_timeout": "none", 00:24:31.699 "timeout_us": 0, 00:24:31.699 "timeout_admin_us": 0, 00:24:31.699 "keep_alive_timeout_ms": 10000, 00:24:31.699 "arbitration_burst": 0, 00:24:31.699 "low_priority_weight": 0, 00:24:31.699 
"medium_priority_weight": 0, 00:24:31.699 "high_priority_weight": 0, 00:24:31.699 "nvme_adminq_poll_period_us": 10000, 00:24:31.699 "nvme_ioq_poll_period_us": 0, 00:24:31.699 "io_queue_requests": 512, 00:24:31.699 "delay_cmd_submit": true, 00:24:31.699 "transport_retry_count": 4, 00:24:31.699 "bdev_retry_count": 3, 00:24:31.699 "transport_ack_timeout": 0, 00:24:31.699 "ctrlr_loss_timeout_sec": 0, 00:24:31.699 "reconnect_delay_sec": 0, 00:24:31.699 "fast_io_fail_timeout_sec": 0, 00:24:31.699 "disable_auto_failback": false, 00:24:31.699 "generate_uuids": false, 00:24:31.699 "transport_tos": 0, 00:24:31.699 "nvme_error_stat": false, 00:24:31.699 "rdma_srq_size": 0, 00:24:31.699 "io_path_stat": false, 00:24:31.699 "allow_accel_sequence": false, 00:24:31.699 "rdma_max_cq_size": 0, 00:24:31.699 "rdma_cm_event_timeout_ms": 0, 00:24:31.699 "dhchap_digests": [ 00:24:31.699 "sha256", 00:24:31.699 "sha384", 00:24:31.699 "sha512" 00:24:31.699 ], 00:24:31.699 "dhchap_dhgroups": [ 00:24:31.699 "null", 00:24:31.699 "ffdhe2048", 00:24:31.699 "ffdhe3072", 00:24:31.699 "ffdhe4096", 00:24:31.699 "ffdhe6144", 00:24:31.699 "ffdhe8192" 00:24:31.699 ] 00:24:31.699 } 00:24:31.699 }, 00:24:31.699 { 00:24:31.699 "method": "bdev_nvme_attach_controller", 00:24:31.699 "params": { 00:24:31.699 "name": "TLSTEST", 00:24:31.699 "trtype": "TCP", 00:24:31.699 "adrfam": "IPv4", 00:24:31.699 "traddr": "10.0.0.2", 00:24:31.699 "trsvcid": "4420", 00:24:31.699 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:31.699 "prchk_reftag": false, 00:24:31.699 "prchk_guard": false, 00:24:31.699 "ctrlr_loss_timeout_sec": 0, 00:24:31.699 "reconnect_delay_sec": 0, 00:24:31.699 "fast_io_fail_timeout_sec": 0, 00:24:31.699 "psk": "/tmp/tmp.mdRZcftFe2", 00:24:31.699 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:31.699 "hdgst": false, 00:24:31.699 "ddgst": false 00:24:31.699 } 00:24:31.699 }, 00:24:31.699 { 00:24:31.699 "method": "bdev_nvme_set_hotplug", 00:24:31.699 "params": { 00:24:31.699 "period_us": 100000, 00:24:31.699 "enable": false 00:24:31.699 } 00:24:31.699 }, 00:24:31.699 { 00:24:31.699 "method": "bdev_wait_for_examine" 00:24:31.699 } 00:24:31.699 ] 00:24:31.699 }, 00:24:31.699 { 00:24:31.699 "subsystem": "nbd", 00:24:31.699 "config": [] 00:24:31.699 } 00:24:31.699 ] 00:24:31.699 }' 00:24:31.699 22:07:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.699 [2024-07-13 22:07:51.085923] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:31.699 [2024-07-13 22:07:51.086069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4119961 ] 00:24:31.958 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.958 [2024-07-13 22:07:51.209658] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.217 [2024-07-13 22:07:51.435236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.475 [2024-07-13 22:07:51.819820] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:32.475 [2024-07-13 22:07:51.820020] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:32.734 22:07:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:32.734 22:07:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:32.734 22:07:52 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:32.734 Running I/O for 10 seconds... 00:24:44.951 00:24:44.951 Latency(us) 00:24:44.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.951 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:44.951 Verification LBA range: start 0x0 length 0x2000 00:24:44.951 TLSTESTn1 : 10.04 2163.73 8.45 0.00 0.00 59014.92 12621.75 90099.86 00:24:44.951 =================================================================================================================== 00:24:44.951 Total : 2163.73 8.45 0.00 0.00 59014.92 12621.75 90099.86 00:24:44.951 0 00:24:44.951 22:08:02 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:44.951 22:08:02 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 4119961 00:24:44.951 22:08:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4119961 ']' 00:24:44.951 22:08:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4119961 00:24:44.951 22:08:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:44.951 22:08:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:44.951 22:08:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4119961 00:24:44.951 22:08:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:44.951 22:08:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:44.951 22:08:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4119961' 00:24:44.951 killing process with pid 4119961 00:24:44.951 22:08:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4119961 00:24:44.951 Received shutdown signal, test time was about 10.000000 seconds 00:24:44.951 00:24:44.951 Latency(us) 00:24:44.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.951 =================================================================================================================== 00:24:44.951 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:44.951 [2024-07-13 22:08:02.217573] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' 22:08:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4119961 00:24:44.951 scheduled for removal in v24.09 hit 1 times 00:24:44.951 22:08:03 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 4119807 00:24:44.951 22:08:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4119807 ']' 00:24:44.951 22:08:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4119807 00:24:44.951 22:08:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:44.951 22:08:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:44.951 22:08:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4119807 00:24:44.951 22:08:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:44.951 22:08:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:44.952 22:08:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4119807' 00:24:44.952 killing process with pid 4119807 00:24:44.952 22:08:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4119807 00:24:44.952 [2024-07-13 22:08:03.213197] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:44.952 22:08:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4119807 00:24:45.211 22:08:04 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:24:45.211 22:08:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:45.211 22:08:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:45.211 22:08:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:45.211 22:08:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4121543 00:24:45.211 22:08:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:45.211 22:08:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4121543 00:24:45.211 22:08:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4121543 ']' 00:24:45.211 22:08:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.211 22:08:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:45.211 22:08:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.211 22:08:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:45.211 22:08:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:45.470 [2024-07-13 22:08:04.653542] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
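A note on reading the tables above: the MiB/s column is just IOPS times the 4096-byte I/O size, e.g. for TLSTESTn1, 2163.73 × 4096 / 2^20 ≈ 8.45 MiB/s, matching the printed value. The 18446744073709551616.00 minimum in the earlier shutdown table is 2^64, consistent with an untouched UINT64_MAX minimum rounded up when printed as a double, i.e. no I/O completed during shutdown, not a real latency.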
00:24:45.470 [2024-07-13 22:08:04.653692] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.470 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.470 [2024-07-13 22:08:04.788997] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.731 [2024-07-13 22:08:05.043907] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.731 [2024-07-13 22:08:05.043983] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.731 [2024-07-13 22:08:05.044013] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.731 [2024-07-13 22:08:05.044038] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.731 [2024-07-13 22:08:05.044060] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:45.731 [2024-07-13 22:08:05.044109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.300 22:08:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:46.300 22:08:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:46.300 22:08:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:46.300 22:08:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:46.300 22:08:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:46.300 22:08:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.300 22:08:05 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.mdRZcftFe2 00:24:46.300 22:08:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.mdRZcftFe2 00:24:46.300 22:08:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:46.559 [2024-07-13 22:08:05.872273] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.559 22:08:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:46.817 22:08:06 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:47.076 [2024-07-13 22:08:06.461897] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:47.076 [2024-07-13 22:08:06.462208] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.333 22:08:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:47.591 malloc0 00:24:47.591 22:08:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:47.850 22:08:07 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.mdRZcftFe2 00:24:48.108 [2024-07-13 22:08:07.304619] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:48.108 22:08:07 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=4121946 00:24:48.108 22:08:07 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:48.108 22:08:07 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:48.108 22:08:07 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 4121946 /var/tmp/bdevperf.sock 00:24:48.108 22:08:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4121946 ']' 00:24:48.108 22:08:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:48.108 22:08:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:48.108 22:08:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:48.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:48.108 22:08:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:48.108 22:08:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.109 [2024-07-13 22:08:07.395480] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:48.109 [2024-07-13 22:08:07.395631] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4121946 ] 00:24:48.109 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.367 [2024-07-13 22:08:07.526644] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.626 [2024-07-13 22:08:07.780082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.192 22:08:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:49.192 22:08:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:49.192 22:08:08 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mdRZcftFe2 00:24:49.450 22:08:08 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:49.707 [2024-07-13 22:08:08.966845] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:49.707 nvme0n1 00:24:49.707 22:08:09 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:49.965 Running I/O for 1 seconds... 
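The target/tls.sh setup traced above reduces to the following RPC sequence (all commands appear verbatim in the trace; 10.0.0.2:4420, the NQNs, and /tmp/tmp.mdRZcftFe2 are this run's fixtures):

    # target side, default /var/tmp/spdk.sock:
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mdRZcftFe2
    # initiator side, against bdevperf's socket (keyring form rather than the deprecated path form):
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mdRZcftFe2
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1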
00:24:50.903 00:24:50.903 Latency(us) 00:24:50.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.903 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:50.903 Verification LBA range: start 0x0 length 0x2000 00:24:50.903 nvme0n1 : 1.05 2029.43 7.93 0.00 0.00 61597.88 10291.58 100973.99 00:24:50.903 =================================================================================================================== 00:24:50.903 Total : 2029.43 7.93 0.00 0.00 61597.88 10291.58 100973.99 00:24:50.903 0 00:24:50.903 22:08:10 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 4121946 00:24:50.903 22:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4121946 ']' 00:24:50.903 22:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4121946 00:24:50.903 22:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:50.903 22:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:50.903 22:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4121946 00:24:50.903 22:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:50.903 22:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:50.903 22:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4121946' 00:24:50.903 killing process with pid 4121946 00:24:50.903 22:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4121946 00:24:50.903 Received shutdown signal, test time was about 1.000000 seconds 00:24:50.903 00:24:50.903 Latency(us) 00:24:50.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.903 =================================================================================================================== 00:24:50.903 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:50.903 22:08:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4121946 00:24:52.283 22:08:11 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 4121543 00:24:52.283 22:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4121543 ']' 00:24:52.283 22:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4121543 00:24:52.283 22:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:52.283 22:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:52.283 22:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4121543 00:24:52.283 22:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:52.283 22:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:52.283 22:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4121543' 00:24:52.283 killing process with pid 4121543 00:24:52.283 22:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4121543 00:24:52.283 [2024-07-13 22:08:11.342074] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:52.283 22:08:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4121543 00:24:53.659 22:08:12 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:24:53.660 22:08:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:53.660 
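The killprocess traces above all expand the same autotest helper; condensed, its logic is roughly as follows (a sketch that omits the sudo special-casing visible in the "= sudo" test in the trace):

    killprocess() {
        local pid=$1 name
        kill -0 "$pid" || return 1                 # still alive?
        name=$(ps --no-headers -o comm= "$pid")    # reactor_0/1/2 in these traces
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                 # terminate, then reap
    }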
22:08:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:53.660 22:08:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:53.660 22:08:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4122552 00:24:53.660 22:08:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:53.660 22:08:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4122552 00:24:53.660 22:08:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4122552 ']' 00:24:53.660 22:08:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.660 22:08:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:53.660 22:08:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:53.660 22:08:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:53.660 22:08:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:53.660 [2024-07-13 22:08:12.851191] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:53.660 [2024-07-13 22:08:12.851346] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:53.660 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.660 [2024-07-13 22:08:12.984328] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.920 [2024-07-13 22:08:13.237345] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:53.920 [2024-07-13 22:08:13.237429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:53.920 [2024-07-13 22:08:13.237459] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:53.920 [2024-07-13 22:08:13.237485] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:53.920 [2024-07-13 22:08:13.237506] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
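The recurring "EAL: No free 2048 kB hugepages reported on node 1" line is typically harmless in these runs; it only notes that NUMA node 1 holds no 2 MiB hugepages. What DPDK actually sees can be checked with, for example:

    grep -i huge /proc/meminfo
    cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages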
00:24:53.920 [2024-07-13 22:08:13.237577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.487 22:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:54.487 22:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:54.487 22:08:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:54.487 22:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:54.487 22:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:54.487 22:08:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.487 22:08:13 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:24:54.487 22:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.487 22:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:54.487 [2024-07-13 22:08:13.791636] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.487 malloc0 00:24:54.487 [2024-07-13 22:08:13.859131] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:54.487 [2024-07-13 22:08:13.859436] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.745 22:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.745 22:08:13 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=4122673 00:24:54.745 22:08:13 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 4122673 /var/tmp/bdevperf.sock 00:24:54.745 22:08:13 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:54.745 22:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4122673 ']' 00:24:54.745 22:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:54.745 22:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:54.745 22:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:54.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:54.745 22:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:54.745 22:08:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:54.745 [2024-07-13 22:08:13.977492] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:24:54.745 [2024-07-13 22:08:13.977625] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4122673 ] 00:24:54.745 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.745 [2024-07-13 22:08:14.103330] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.005 [2024-07-13 22:08:14.335126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.572 22:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:55.572 22:08:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:24:55.572 22:08:14 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.mdRZcftFe2 00:24:55.830 22:08:15 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:56.089 [2024-07-13 22:08:15.470978] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:56.348 nvme0n1 00:24:56.348 22:08:15 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:56.348 Running I/O for 1 seconds... 00:24:57.725 00:24:57.725 Latency(us) 00:24:57.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.725 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:57.725 Verification LBA range: start 0x0 length 0x2000 00:24:57.725 nvme0n1 : 1.05 2104.87 8.22 0.00 0.00 59364.69 11747.93 89711.50 00:24:57.725 =================================================================================================================== 00:24:57.725 Total : 2104.87 8.22 0.00 0.00 59364.69 11747.93 89711.50 00:24:57.725 0 00:24:57.725 22:08:16 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:24:57.725 22:08:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.725 22:08:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:57.725 22:08:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.725 22:08:16 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:24:57.725 "subsystems": [ 00:24:57.725 { 00:24:57.725 "subsystem": "keyring", 00:24:57.725 "config": [ 00:24:57.725 { 00:24:57.725 "method": "keyring_file_add_key", 00:24:57.725 "params": { 00:24:57.725 "name": "key0", 00:24:57.725 "path": "/tmp/tmp.mdRZcftFe2" 00:24:57.725 } 00:24:57.725 } 00:24:57.725 ] 00:24:57.725 }, 00:24:57.725 { 00:24:57.725 "subsystem": "iobuf", 00:24:57.725 "config": [ 00:24:57.725 { 00:24:57.725 "method": "iobuf_set_options", 00:24:57.725 "params": { 00:24:57.725 "small_pool_count": 8192, 00:24:57.725 "large_pool_count": 1024, 00:24:57.725 "small_bufsize": 8192, 00:24:57.725 "large_bufsize": 135168 00:24:57.725 } 00:24:57.725 } 00:24:57.725 ] 00:24:57.725 }, 00:24:57.725 { 00:24:57.725 "subsystem": "sock", 00:24:57.725 "config": [ 00:24:57.725 { 00:24:57.725 "method": "sock_set_default_impl", 00:24:57.725 "params": { 00:24:57.725 "impl_name": "posix" 00:24:57.725 } 
00:24:57.725 }, 00:24:57.725 { 00:24:57.725 "method": "sock_impl_set_options", 00:24:57.725 "params": { 00:24:57.725 "impl_name": "ssl", 00:24:57.725 "recv_buf_size": 4096, 00:24:57.725 "send_buf_size": 4096, 00:24:57.725 "enable_recv_pipe": true, 00:24:57.725 "enable_quickack": false, 00:24:57.725 "enable_placement_id": 0, 00:24:57.725 "enable_zerocopy_send_server": true, 00:24:57.725 "enable_zerocopy_send_client": false, 00:24:57.725 "zerocopy_threshold": 0, 00:24:57.725 "tls_version": 0, 00:24:57.725 "enable_ktls": false 00:24:57.725 } 00:24:57.725 }, 00:24:57.725 { 00:24:57.725 "method": "sock_impl_set_options", 00:24:57.725 "params": { 00:24:57.725 "impl_name": "posix", 00:24:57.725 "recv_buf_size": 2097152, 00:24:57.725 "send_buf_size": 2097152, 00:24:57.725 "enable_recv_pipe": true, 00:24:57.725 "enable_quickack": false, 00:24:57.725 "enable_placement_id": 0, 00:24:57.725 "enable_zerocopy_send_server": true, 00:24:57.725 "enable_zerocopy_send_client": false, 00:24:57.725 "zerocopy_threshold": 0, 00:24:57.725 "tls_version": 0, 00:24:57.725 "enable_ktls": false 00:24:57.725 } 00:24:57.725 } 00:24:57.725 ] 00:24:57.725 }, 00:24:57.725 { 00:24:57.725 "subsystem": "vmd", 00:24:57.725 "config": [] 00:24:57.725 }, 00:24:57.725 { 00:24:57.725 "subsystem": "accel", 00:24:57.725 "config": [ 00:24:57.725 { 00:24:57.725 "method": "accel_set_options", 00:24:57.725 "params": { 00:24:57.725 "small_cache_size": 128, 00:24:57.725 "large_cache_size": 16, 00:24:57.725 "task_count": 2048, 00:24:57.725 "sequence_count": 2048, 00:24:57.725 "buf_count": 2048 00:24:57.725 } 00:24:57.725 } 00:24:57.725 ] 00:24:57.725 }, 00:24:57.725 { 00:24:57.725 "subsystem": "bdev", 00:24:57.725 "config": [ 00:24:57.725 { 00:24:57.725 "method": "bdev_set_options", 00:24:57.725 "params": { 00:24:57.725 "bdev_io_pool_size": 65535, 00:24:57.725 "bdev_io_cache_size": 256, 00:24:57.725 "bdev_auto_examine": true, 00:24:57.725 "iobuf_small_cache_size": 128, 00:24:57.725 "iobuf_large_cache_size": 16 00:24:57.725 } 00:24:57.725 }, 00:24:57.725 { 00:24:57.725 "method": "bdev_raid_set_options", 00:24:57.725 "params": { 00:24:57.725 "process_window_size_kb": 1024 00:24:57.725 } 00:24:57.725 }, 00:24:57.725 { 00:24:57.725 "method": "bdev_iscsi_set_options", 00:24:57.725 "params": { 00:24:57.725 "timeout_sec": 30 00:24:57.725 } 00:24:57.725 }, 00:24:57.725 { 00:24:57.725 "method": "bdev_nvme_set_options", 00:24:57.725 "params": { 00:24:57.725 "action_on_timeout": "none", 00:24:57.725 "timeout_us": 0, 00:24:57.725 "timeout_admin_us": 0, 00:24:57.725 "keep_alive_timeout_ms": 10000, 00:24:57.725 "arbitration_burst": 0, 00:24:57.725 "low_priority_weight": 0, 00:24:57.725 "medium_priority_weight": 0, 00:24:57.725 "high_priority_weight": 0, 00:24:57.725 "nvme_adminq_poll_period_us": 10000, 00:24:57.725 "nvme_ioq_poll_period_us": 0, 00:24:57.725 "io_queue_requests": 0, 00:24:57.725 "delay_cmd_submit": true, 00:24:57.725 "transport_retry_count": 4, 00:24:57.725 "bdev_retry_count": 3, 00:24:57.725 "transport_ack_timeout": 0, 00:24:57.725 "ctrlr_loss_timeout_sec": 0, 00:24:57.725 "reconnect_delay_sec": 0, 00:24:57.725 "fast_io_fail_timeout_sec": 0, 00:24:57.725 "disable_auto_failback": false, 00:24:57.725 "generate_uuids": false, 00:24:57.725 "transport_tos": 0, 00:24:57.725 "nvme_error_stat": false, 00:24:57.725 "rdma_srq_size": 0, 00:24:57.725 "io_path_stat": false, 00:24:57.725 "allow_accel_sequence": false, 00:24:57.725 "rdma_max_cq_size": 0, 00:24:57.725 "rdma_cm_event_timeout_ms": 0, 00:24:57.725 "dhchap_digests": [ 00:24:57.725 "sha256", 
00:24:57.725 "sha384", 00:24:57.725 "sha512" 00:24:57.725 ], 00:24:57.725 "dhchap_dhgroups": [ 00:24:57.725 "null", 00:24:57.725 "ffdhe2048", 00:24:57.725 "ffdhe3072", 00:24:57.725 "ffdhe4096", 00:24:57.725 "ffdhe6144", 00:24:57.725 "ffdhe8192" 00:24:57.725 ] 00:24:57.725 } 00:24:57.725 }, 00:24:57.725 { 00:24:57.725 "method": "bdev_nvme_set_hotplug", 00:24:57.725 "params": { 00:24:57.725 "period_us": 100000, 00:24:57.725 "enable": false 00:24:57.725 } 00:24:57.725 }, 00:24:57.725 { 00:24:57.725 "method": "bdev_malloc_create", 00:24:57.725 "params": { 00:24:57.725 "name": "malloc0", 00:24:57.725 "num_blocks": 8192, 00:24:57.725 "block_size": 4096, 00:24:57.725 "physical_block_size": 4096, 00:24:57.725 "uuid": "67c18e94-b21e-4507-88bf-adc480b7990c", 00:24:57.725 "optimal_io_boundary": 0 00:24:57.725 } 00:24:57.725 }, 00:24:57.725 { 00:24:57.725 "method": "bdev_wait_for_examine" 00:24:57.725 } 00:24:57.725 ] 00:24:57.725 }, 00:24:57.725 { 00:24:57.725 "subsystem": "nbd", 00:24:57.725 "config": [] 00:24:57.725 }, 00:24:57.725 { 00:24:57.725 "subsystem": "scheduler", 00:24:57.725 "config": [ 00:24:57.725 { 00:24:57.725 "method": "framework_set_scheduler", 00:24:57.725 "params": { 00:24:57.725 "name": "static" 00:24:57.725 } 00:24:57.725 } 00:24:57.725 ] 00:24:57.725 }, 00:24:57.725 { 00:24:57.725 "subsystem": "nvmf", 00:24:57.725 "config": [ 00:24:57.725 { 00:24:57.725 "method": "nvmf_set_config", 00:24:57.725 "params": { 00:24:57.725 "discovery_filter": "match_any", 00:24:57.725 "admin_cmd_passthru": { 00:24:57.725 "identify_ctrlr": false 00:24:57.725 } 00:24:57.725 } 00:24:57.725 }, 00:24:57.725 { 00:24:57.725 "method": "nvmf_set_max_subsystems", 00:24:57.725 "params": { 00:24:57.725 "max_subsystems": 1024 00:24:57.725 } 00:24:57.725 }, 00:24:57.725 { 00:24:57.725 "method": "nvmf_set_crdt", 00:24:57.725 "params": { 00:24:57.725 "crdt1": 0, 00:24:57.725 "crdt2": 0, 00:24:57.725 "crdt3": 0 00:24:57.725 } 00:24:57.725 }, 00:24:57.725 { 00:24:57.725 "method": "nvmf_create_transport", 00:24:57.725 "params": { 00:24:57.725 "trtype": "TCP", 00:24:57.725 "max_queue_depth": 128, 00:24:57.725 "max_io_qpairs_per_ctrlr": 127, 00:24:57.725 "in_capsule_data_size": 4096, 00:24:57.725 "max_io_size": 131072, 00:24:57.725 "io_unit_size": 131072, 00:24:57.725 "max_aq_depth": 128, 00:24:57.725 "num_shared_buffers": 511, 00:24:57.725 "buf_cache_size": 4294967295, 00:24:57.725 "dif_insert_or_strip": false, 00:24:57.725 "zcopy": false, 00:24:57.725 "c2h_success": false, 00:24:57.725 "sock_priority": 0, 00:24:57.725 "abort_timeout_sec": 1, 00:24:57.725 "ack_timeout": 0, 00:24:57.725 "data_wr_pool_size": 0 00:24:57.725 } 00:24:57.725 }, 00:24:57.725 { 00:24:57.725 "method": "nvmf_create_subsystem", 00:24:57.725 "params": { 00:24:57.725 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:57.725 "allow_any_host": false, 00:24:57.725 "serial_number": "00000000000000000000", 00:24:57.725 "model_number": "SPDK bdev Controller", 00:24:57.725 "max_namespaces": 32, 00:24:57.725 "min_cntlid": 1, 00:24:57.725 "max_cntlid": 65519, 00:24:57.725 "ana_reporting": false 00:24:57.725 } 00:24:57.725 }, 00:24:57.725 { 00:24:57.725 "method": "nvmf_subsystem_add_host", 00:24:57.725 "params": { 00:24:57.725 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:57.725 "host": "nqn.2016-06.io.spdk:host1", 00:24:57.725 "psk": "key0" 00:24:57.725 } 00:24:57.726 }, 00:24:57.726 { 00:24:57.726 "method": "nvmf_subsystem_add_ns", 00:24:57.726 "params": { 00:24:57.726 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:57.726 "namespace": { 00:24:57.726 "nsid": 1, 
00:24:57.726 "bdev_name": "malloc0", 00:24:57.726 "nguid": "67C18E94B21E450788BFADC480B7990C", 00:24:57.726 "uuid": "67c18e94-b21e-4507-88bf-adc480b7990c", 00:24:57.726 "no_auto_visible": false 00:24:57.726 } 00:24:57.726 } 00:24:57.726 }, 00:24:57.726 { 00:24:57.726 "method": "nvmf_subsystem_add_listener", 00:24:57.726 "params": { 00:24:57.726 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:57.726 "listen_address": { 00:24:57.726 "trtype": "TCP", 00:24:57.726 "adrfam": "IPv4", 00:24:57.726 "traddr": "10.0.0.2", 00:24:57.726 "trsvcid": "4420" 00:24:57.726 }, 00:24:57.726 "secure_channel": true 00:24:57.726 } 00:24:57.726 } 00:24:57.726 ] 00:24:57.726 } 00:24:57.726 ] 00:24:57.726 }' 00:24:57.726 22:08:16 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:57.985 22:08:17 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:24:57.985 "subsystems": [ 00:24:57.985 { 00:24:57.985 "subsystem": "keyring", 00:24:57.985 "config": [ 00:24:57.985 { 00:24:57.985 "method": "keyring_file_add_key", 00:24:57.985 "params": { 00:24:57.985 "name": "key0", 00:24:57.985 "path": "/tmp/tmp.mdRZcftFe2" 00:24:57.985 } 00:24:57.985 } 00:24:57.985 ] 00:24:57.985 }, 00:24:57.985 { 00:24:57.985 "subsystem": "iobuf", 00:24:57.985 "config": [ 00:24:57.985 { 00:24:57.985 "method": "iobuf_set_options", 00:24:57.985 "params": { 00:24:57.985 "small_pool_count": 8192, 00:24:57.985 "large_pool_count": 1024, 00:24:57.985 "small_bufsize": 8192, 00:24:57.985 "large_bufsize": 135168 00:24:57.985 } 00:24:57.985 } 00:24:57.985 ] 00:24:57.985 }, 00:24:57.985 { 00:24:57.985 "subsystem": "sock", 00:24:57.985 "config": [ 00:24:57.985 { 00:24:57.985 "method": "sock_set_default_impl", 00:24:57.985 "params": { 00:24:57.985 "impl_name": "posix" 00:24:57.985 } 00:24:57.985 }, 00:24:57.985 { 00:24:57.985 "method": "sock_impl_set_options", 00:24:57.985 "params": { 00:24:57.985 "impl_name": "ssl", 00:24:57.985 "recv_buf_size": 4096, 00:24:57.985 "send_buf_size": 4096, 00:24:57.985 "enable_recv_pipe": true, 00:24:57.985 "enable_quickack": false, 00:24:57.985 "enable_placement_id": 0, 00:24:57.985 "enable_zerocopy_send_server": true, 00:24:57.985 "enable_zerocopy_send_client": false, 00:24:57.985 "zerocopy_threshold": 0, 00:24:57.985 "tls_version": 0, 00:24:57.985 "enable_ktls": false 00:24:57.985 } 00:24:57.985 }, 00:24:57.985 { 00:24:57.985 "method": "sock_impl_set_options", 00:24:57.985 "params": { 00:24:57.985 "impl_name": "posix", 00:24:57.985 "recv_buf_size": 2097152, 00:24:57.985 "send_buf_size": 2097152, 00:24:57.985 "enable_recv_pipe": true, 00:24:57.985 "enable_quickack": false, 00:24:57.985 "enable_placement_id": 0, 00:24:57.985 "enable_zerocopy_send_server": true, 00:24:57.985 "enable_zerocopy_send_client": false, 00:24:57.985 "zerocopy_threshold": 0, 00:24:57.985 "tls_version": 0, 00:24:57.985 "enable_ktls": false 00:24:57.985 } 00:24:57.985 } 00:24:57.985 ] 00:24:57.985 }, 00:24:57.985 { 00:24:57.985 "subsystem": "vmd", 00:24:57.985 "config": [] 00:24:57.985 }, 00:24:57.985 { 00:24:57.985 "subsystem": "accel", 00:24:57.985 "config": [ 00:24:57.985 { 00:24:57.985 "method": "accel_set_options", 00:24:57.985 "params": { 00:24:57.986 "small_cache_size": 128, 00:24:57.986 "large_cache_size": 16, 00:24:57.986 "task_count": 2048, 00:24:57.986 "sequence_count": 2048, 00:24:57.986 "buf_count": 2048 00:24:57.986 } 00:24:57.986 } 00:24:57.986 ] 00:24:57.986 }, 00:24:57.986 { 00:24:57.986 "subsystem": "bdev", 00:24:57.986 "config": [ 
00:24:57.986 { 00:24:57.986 "method": "bdev_set_options", 00:24:57.986 "params": { 00:24:57.986 "bdev_io_pool_size": 65535, 00:24:57.986 "bdev_io_cache_size": 256, 00:24:57.986 "bdev_auto_examine": true, 00:24:57.986 "iobuf_small_cache_size": 128, 00:24:57.986 "iobuf_large_cache_size": 16 00:24:57.986 } 00:24:57.986 }, 00:24:57.986 { 00:24:57.986 "method": "bdev_raid_set_options", 00:24:57.986 "params": { 00:24:57.986 "process_window_size_kb": 1024 00:24:57.986 } 00:24:57.986 }, 00:24:57.986 { 00:24:57.986 "method": "bdev_iscsi_set_options", 00:24:57.986 "params": { 00:24:57.986 "timeout_sec": 30 00:24:57.986 } 00:24:57.986 }, 00:24:57.986 { 00:24:57.986 "method": "bdev_nvme_set_options", 00:24:57.986 "params": { 00:24:57.986 "action_on_timeout": "none", 00:24:57.986 "timeout_us": 0, 00:24:57.986 "timeout_admin_us": 0, 00:24:57.986 "keep_alive_timeout_ms": 10000, 00:24:57.986 "arbitration_burst": 0, 00:24:57.986 "low_priority_weight": 0, 00:24:57.986 "medium_priority_weight": 0, 00:24:57.986 "high_priority_weight": 0, 00:24:57.986 "nvme_adminq_poll_period_us": 10000, 00:24:57.986 "nvme_ioq_poll_period_us": 0, 00:24:57.986 "io_queue_requests": 512, 00:24:57.986 "delay_cmd_submit": true, 00:24:57.986 "transport_retry_count": 4, 00:24:57.986 "bdev_retry_count": 3, 00:24:57.986 "transport_ack_timeout": 0, 00:24:57.986 "ctrlr_loss_timeout_sec": 0, 00:24:57.986 "reconnect_delay_sec": 0, 00:24:57.986 "fast_io_fail_timeout_sec": 0, 00:24:57.986 "disable_auto_failback": false, 00:24:57.986 "generate_uuids": false, 00:24:57.986 "transport_tos": 0, 00:24:57.986 "nvme_error_stat": false, 00:24:57.986 "rdma_srq_size": 0, 00:24:57.986 "io_path_stat": false, 00:24:57.986 "allow_accel_sequence": false, 00:24:57.986 "rdma_max_cq_size": 0, 00:24:57.986 "rdma_cm_event_timeout_ms": 0, 00:24:57.986 "dhchap_digests": [ 00:24:57.986 "sha256", 00:24:57.986 "sha384", 00:24:57.986 "sha512" 00:24:57.986 ], 00:24:57.986 "dhchap_dhgroups": [ 00:24:57.986 "null", 00:24:57.986 "ffdhe2048", 00:24:57.986 "ffdhe3072", 00:24:57.986 "ffdhe4096", 00:24:57.986 "ffdhe6144", 00:24:57.986 "ffdhe8192" 00:24:57.986 ] 00:24:57.986 } 00:24:57.986 }, 00:24:57.986 { 00:24:57.986 "method": "bdev_nvme_attach_controller", 00:24:57.986 "params": { 00:24:57.986 "name": "nvme0", 00:24:57.986 "trtype": "TCP", 00:24:57.986 "adrfam": "IPv4", 00:24:57.986 "traddr": "10.0.0.2", 00:24:57.986 "trsvcid": "4420", 00:24:57.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:57.986 "prchk_reftag": false, 00:24:57.986 "prchk_guard": false, 00:24:57.986 "ctrlr_loss_timeout_sec": 0, 00:24:57.986 "reconnect_delay_sec": 0, 00:24:57.986 "fast_io_fail_timeout_sec": 0, 00:24:57.986 "psk": "key0", 00:24:57.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:57.986 "hdgst": false, 00:24:57.986 "ddgst": false 00:24:57.986 } 00:24:57.986 }, 00:24:57.986 { 00:24:57.986 "method": "bdev_nvme_set_hotplug", 00:24:57.986 "params": { 00:24:57.986 "period_us": 100000, 00:24:57.986 "enable": false 00:24:57.986 } 00:24:57.986 }, 00:24:57.986 { 00:24:57.986 "method": "bdev_enable_histogram", 00:24:57.986 "params": { 00:24:57.986 "name": "nvme0n1", 00:24:57.986 "enable": true 00:24:57.986 } 00:24:57.986 }, 00:24:57.986 { 00:24:57.986 "method": "bdev_wait_for_examine" 00:24:57.986 } 00:24:57.986 ] 00:24:57.986 }, 00:24:57.986 { 00:24:57.986 "subsystem": "nbd", 00:24:57.986 "config": [] 00:24:57.986 } 00:24:57.986 ] 00:24:57.986 }' 00:24:57.986 22:08:17 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 4122673 00:24:57.986 22:08:17 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 4122673 ']' 00:24:57.986 22:08:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4122673 00:24:57.986 22:08:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:57.986 22:08:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:57.986 22:08:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4122673 00:24:57.986 22:08:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:57.986 22:08:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:57.986 22:08:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4122673' 00:24:57.986 killing process with pid 4122673 00:24:57.986 22:08:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4122673 00:24:57.986 Received shutdown signal, test time was about 1.000000 seconds 00:24:57.986 00:24:57.986 Latency(us) 00:24:57.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.986 =================================================================================================================== 00:24:57.986 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:57.986 22:08:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4122673 00:24:58.926 22:08:18 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 4122552 00:24:58.926 22:08:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4122552 ']' 00:24:58.926 22:08:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4122552 00:24:58.926 22:08:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:24:58.926 22:08:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:58.926 22:08:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4122552 00:24:58.926 22:08:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:58.926 22:08:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:58.926 22:08:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4122552' 00:24:58.926 killing process with pid 4122552 00:24:58.926 22:08:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4122552 00:24:58.926 22:08:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4122552 00:25:00.314 22:08:19 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:25:00.314 22:08:19 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:25:00.314 "subsystems": [ 00:25:00.314 { 00:25:00.314 "subsystem": "keyring", 00:25:00.314 "config": [ 00:25:00.314 { 00:25:00.314 "method": "keyring_file_add_key", 00:25:00.314 "params": { 00:25:00.314 "name": "key0", 00:25:00.314 "path": "/tmp/tmp.mdRZcftFe2" 00:25:00.314 } 00:25:00.314 } 00:25:00.314 ] 00:25:00.314 }, 00:25:00.314 { 00:25:00.314 "subsystem": "iobuf", 00:25:00.314 "config": [ 00:25:00.314 { 00:25:00.314 "method": "iobuf_set_options", 00:25:00.314 "params": { 00:25:00.314 "small_pool_count": 8192, 00:25:00.314 "large_pool_count": 1024, 00:25:00.314 "small_bufsize": 8192, 00:25:00.314 "large_bufsize": 135168 00:25:00.314 } 00:25:00.314 } 00:25:00.314 ] 00:25:00.314 }, 00:25:00.314 { 00:25:00.314 "subsystem": "sock", 00:25:00.314 "config": [ 00:25:00.314 { 00:25:00.314 "method": "sock_set_default_impl", 00:25:00.314 "params": { 00:25:00.314 "impl_name": 
"posix" 00:25:00.314 } 00:25:00.314 }, 00:25:00.314 { 00:25:00.314 "method": "sock_impl_set_options", 00:25:00.314 "params": { 00:25:00.314 "impl_name": "ssl", 00:25:00.314 "recv_buf_size": 4096, 00:25:00.314 "send_buf_size": 4096, 00:25:00.314 "enable_recv_pipe": true, 00:25:00.314 "enable_quickack": false, 00:25:00.314 "enable_placement_id": 0, 00:25:00.314 "enable_zerocopy_send_server": true, 00:25:00.314 "enable_zerocopy_send_client": false, 00:25:00.314 "zerocopy_threshold": 0, 00:25:00.314 "tls_version": 0, 00:25:00.314 "enable_ktls": false 00:25:00.314 } 00:25:00.314 }, 00:25:00.314 { 00:25:00.314 "method": "sock_impl_set_options", 00:25:00.314 "params": { 00:25:00.314 "impl_name": "posix", 00:25:00.314 "recv_buf_size": 2097152, 00:25:00.314 "send_buf_size": 2097152, 00:25:00.314 "enable_recv_pipe": true, 00:25:00.314 "enable_quickack": false, 00:25:00.314 "enable_placement_id": 0, 00:25:00.314 "enable_zerocopy_send_server": true, 00:25:00.314 "enable_zerocopy_send_client": false, 00:25:00.314 "zerocopy_threshold": 0, 00:25:00.314 "tls_version": 0, 00:25:00.314 "enable_ktls": false 00:25:00.314 } 00:25:00.314 } 00:25:00.314 ] 00:25:00.314 }, 00:25:00.314 { 00:25:00.314 "subsystem": "vmd", 00:25:00.314 "config": [] 00:25:00.314 }, 00:25:00.314 { 00:25:00.314 "subsystem": "accel", 00:25:00.314 "config": [ 00:25:00.314 { 00:25:00.314 "method": "accel_set_options", 00:25:00.314 "params": { 00:25:00.314 "small_cache_size": 128, 00:25:00.314 "large_cache_size": 16, 00:25:00.314 "task_count": 2048, 00:25:00.314 "sequence_count": 2048, 00:25:00.314 "buf_count": 2048 00:25:00.314 } 00:25:00.314 } 00:25:00.314 ] 00:25:00.314 }, 00:25:00.314 { 00:25:00.314 "subsystem": "bdev", 00:25:00.314 "config": [ 00:25:00.314 { 00:25:00.314 "method": "bdev_set_options", 00:25:00.314 "params": { 00:25:00.314 "bdev_io_pool_size": 65535, 00:25:00.314 "bdev_io_cache_size": 256, 00:25:00.314 "bdev_auto_examine": true, 00:25:00.314 "iobuf_small_cache_size": 128, 00:25:00.314 "iobuf_large_cache_size": 16 00:25:00.314 } 00:25:00.314 }, 00:25:00.314 { 00:25:00.314 "method": "bdev_raid_set_options", 00:25:00.314 "params": { 00:25:00.314 "process_window_size_kb": 1024 00:25:00.314 } 00:25:00.314 }, 00:25:00.314 { 00:25:00.314 "method": "bdev_iscsi_set_options", 00:25:00.314 "params": { 00:25:00.314 "timeout_sec": 30 00:25:00.314 } 00:25:00.314 }, 00:25:00.314 { 00:25:00.314 "method": "bdev_nvme_set_options", 00:25:00.314 "params": { 00:25:00.314 "action_on_timeout": "none", 00:25:00.314 "timeout_us": 0, 00:25:00.314 "timeout_admin_us": 0, 00:25:00.314 "keep_alive_timeout_ms": 10000, 00:25:00.314 "arbitration_burst": 0, 00:25:00.314 "low_priority_weight": 0, 00:25:00.314 "medium_priority_weight": 0, 00:25:00.314 "high_priority_weight": 0, 00:25:00.314 "nvme_adminq_poll_period_us": 10000, 00:25:00.314 "nvme_ioq_poll_period_us": 0, 00:25:00.314 "io_queue_requests": 0, 00:25:00.314 "delay_cmd_submit": true, 00:25:00.314 "transport_retry_count": 4, 00:25:00.314 "bdev_retry_count": 3, 00:25:00.314 "transport_ack_timeout": 0, 00:25:00.314 "ctrlr_loss_timeout_sec": 0, 00:25:00.314 "reconnect_delay_sec": 0, 00:25:00.314 "fast_io_fail_timeout_sec": 0, 00:25:00.314 "disable_auto_failback": false, 00:25:00.314 "generate_uuids": false, 00:25:00.314 "transport_tos": 0, 00:25:00.314 "nvme_error_stat": false, 00:25:00.314 "rdma_srq_size": 0, 00:25:00.314 "io_path_stat": false, 00:25:00.314 "allow_accel_sequence": false, 00:25:00.314 "rdma_max_cq_size": 0, 00:25:00.314 "rdma_cm_event_timeout_ms": 0, 00:25:00.314 "dhchap_digests": [ 
00:25:00.314 "sha256", 00:25:00.314 "sha384", 00:25:00.314 "sha512" 00:25:00.314 ], 00:25:00.314 "dhchap_dhgroups": [ 00:25:00.314 "null", 00:25:00.314 "ffdhe2048", 00:25:00.314 "ffdhe3072", 00:25:00.314 "ffdhe4096", 00:25:00.314 "ffdhe6144", 00:25:00.314 "ffdhe8192" 00:25:00.314 ] 00:25:00.314 } 00:25:00.314 }, 00:25:00.314 { 00:25:00.314 "method": "bdev_nvme_set_hotplug", 00:25:00.314 "params": { 00:25:00.314 "period_us": 100000, 00:25:00.314 "enable": false 00:25:00.314 } 00:25:00.314 }, 00:25:00.314 { 00:25:00.314 "method": "bdev_malloc_create", 00:25:00.314 "params": { 00:25:00.314 "name": "malloc0", 00:25:00.314 "num_blocks": 8192, 00:25:00.314 "block_size": 4096, 00:25:00.314 "physical_block_size": 4096, 00:25:00.314 "uuid": "67c18e94-b21e-4507-88bf-adc480b7990c", 00:25:00.314 "optimal_io_boundary": 0 00:25:00.314 } 00:25:00.314 }, 00:25:00.314 { 00:25:00.314 "method": "bdev_wait_for_examine" 00:25:00.314 } 00:25:00.314 ] 00:25:00.314 }, 00:25:00.314 { 00:25:00.314 "subsystem": "nbd", 00:25:00.314 "config": [] 00:25:00.314 }, 00:25:00.314 { 00:25:00.314 "subsystem": "scheduler", 00:25:00.314 "config": [ 00:25:00.314 { 00:25:00.314 "method": "framework_set_scheduler", 00:25:00.314 "params": { 00:25:00.314 "name": "static" 00:25:00.314 } 00:25:00.314 } 00:25:00.314 ] 00:25:00.314 }, 00:25:00.314 { 00:25:00.314 "subsystem": "nvmf", 00:25:00.314 "config": [ 00:25:00.314 { 00:25:00.314 "method": "nvmf_set_config", 00:25:00.314 "params": { 00:25:00.314 "discovery_filter": "match_any", 00:25:00.314 "admin_cmd_passthru": { 00:25:00.314 "identify_ctrlr": false 00:25:00.314 } 00:25:00.314 } 00:25:00.314 }, 00:25:00.314 { 00:25:00.314 "method": "nvmf_set_max_subsystems", 00:25:00.314 "params": { 00:25:00.314 "max_subsystems": 1024 00:25:00.314 } 00:25:00.314 }, 00:25:00.314 { 00:25:00.314 "method": "nvmf_set_crdt", 00:25:00.314 "params": { 00:25:00.314 "crdt1": 0, 00:25:00.314 "crdt2": 0, 00:25:00.314 "crdt3": 0 00:25:00.314 } 00:25:00.314 }, 00:25:00.314 { 00:25:00.314 "method": "nvmf_create_transport", 00:25:00.314 "params": { 00:25:00.314 "trtype": "TCP", 00:25:00.314 "max_queue_depth": 128, 00:25:00.314 "max_io_qpairs_per_ctrlr": 127, 00:25:00.314 "in_capsule_data_size": 4096, 00:25:00.314 "max_io_size": 131072, 00:25:00.314 "io_unit_size": 131072, 00:25:00.314 "max_aq_depth": 128, 00:25:00.314 "num_shared_buffers": 511, 00:25:00.314 "buf_cache_size": 4294967295, 00:25:00.314 "dif_insert_or_strip": false, 00:25:00.314 "zcopy": false, 00:25:00.314 "c2h_success": false, 00:25:00.314 "sock_priority": 0, 00:25:00.314 "abort_timeout_sec": 1, 00:25:00.314 "ack_timeout": 0, 00:25:00.314 "data_wr_pool_size": 0 00:25:00.314 } 00:25:00.314 }, 00:25:00.314 { 00:25:00.314 "method": "nvmf_create_subsystem", 00:25:00.314 "params": { 00:25:00.314 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.314 "allow_any_host": false, 00:25:00.314 "serial_number": "00000000000000000000", 00:25:00.314 "model_number": "SPDK bdev Controller", 00:25:00.314 "max_namespaces": 32, 00:25:00.314 "min_cntlid": 1, 00:25:00.314 "max_cntlid": 65519, 00:25:00.314 "ana_reporting": false 00:25:00.314 } 00:25:00.314 }, 00:25:00.314 { 00:25:00.314 "method": "nvmf_subsystem_add_host", 00:25:00.314 "params": { 00:25:00.314 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.314 "host": "nqn.2016-06.io.spdk:host1", 00:25:00.314 "psk": "key0" 00:25:00.314 } 00:25:00.314 }, 00:25:00.314 { 00:25:00.314 "method": "nvmf_subsystem_add_ns", 00:25:00.314 "params": { 00:25:00.314 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.314 "namespace": { 
00:25:00.314 "nsid": 1, 00:25:00.314 "bdev_name": "malloc0", 00:25:00.314 "nguid": "67C18E94B21E450788BFADC480B7990C", 00:25:00.314 "uuid": "67c18e94-b21e-4507-88bf-adc480b7990c", 00:25:00.314 "no_auto_visible": false 00:25:00.314 } 00:25:00.314 } 00:25:00.314 }, 00:25:00.314 { 00:25:00.314 "method": "nvmf_subsystem_add_listener", 00:25:00.314 "params": { 00:25:00.314 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.314 "listen_address": { 00:25:00.314 "trtype": "TCP", 00:25:00.314 "adrfam": "IPv4", 00:25:00.314 "traddr": "10.0.0.2", 00:25:00.314 "trsvcid": "4420" 00:25:00.314 }, 00:25:00.314 "secure_channel": true 00:25:00.314 } 00:25:00.314 } 00:25:00.314 ] 00:25:00.314 } 00:25:00.314 ] 00:25:00.314 }' 00:25:00.314 22:08:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:00.314 22:08:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:00.314 22:08:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:00.314 22:08:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4123450 00:25:00.314 22:08:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:00.314 22:08:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4123450 00:25:00.314 22:08:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4123450 ']' 00:25:00.314 22:08:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.314 22:08:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:00.314 22:08:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.314 22:08:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:00.314 22:08:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:00.630 [2024-07-13 22:08:19.775879] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:00.630 [2024-07-13 22:08:19.776045] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.630 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.630 [2024-07-13 22:08:19.917574] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.890 [2024-07-13 22:08:20.151424] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.890 [2024-07-13 22:08:20.151505] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.890 [2024-07-13 22:08:20.151546] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.890 [2024-07-13 22:08:20.151569] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.890 [2024-07-13 22:08:20.151589] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:00.890 [2024-07-13 22:08:20.151747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.456 [2024-07-13 22:08:20.680892] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.456 [2024-07-13 22:08:20.712889] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:01.456 [2024-07-13 22:08:20.713165] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:01.456 22:08:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:01.456 22:08:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:01.456 22:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:01.456 22:08:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:01.456 22:08:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:01.456 22:08:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.456 22:08:20 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=4123554 00:25:01.456 22:08:20 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 4123554 /var/tmp/bdevperf.sock 00:25:01.456 22:08:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 4123554 ']' 00:25:01.456 22:08:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:01.456 22:08:20 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:01.456 22:08:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:01.456 22:08:20 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:25:01.456 "subsystems": [ 00:25:01.456 { 00:25:01.456 "subsystem": "keyring", 00:25:01.456 "config": [ 00:25:01.456 { 00:25:01.456 "method": "keyring_file_add_key", 00:25:01.456 "params": { 00:25:01.456 "name": "key0", 00:25:01.456 "path": "/tmp/tmp.mdRZcftFe2" 00:25:01.456 } 00:25:01.456 } 00:25:01.456 ] 00:25:01.456 }, 00:25:01.456 { 00:25:01.456 "subsystem": "iobuf", 00:25:01.456 "config": [ 00:25:01.456 { 00:25:01.456 "method": "iobuf_set_options", 00:25:01.456 "params": { 00:25:01.456 "small_pool_count": 8192, 00:25:01.456 "large_pool_count": 1024, 00:25:01.456 "small_bufsize": 8192, 00:25:01.456 "large_bufsize": 135168 00:25:01.456 } 00:25:01.456 } 00:25:01.456 ] 00:25:01.456 }, 00:25:01.456 { 00:25:01.456 "subsystem": "sock", 00:25:01.456 "config": [ 00:25:01.456 { 00:25:01.456 "method": "sock_set_default_impl", 00:25:01.456 "params": { 00:25:01.456 "impl_name": "posix" 00:25:01.456 } 00:25:01.456 }, 00:25:01.456 { 00:25:01.456 "method": "sock_impl_set_options", 00:25:01.456 "params": { 00:25:01.456 "impl_name": "ssl", 00:25:01.456 "recv_buf_size": 4096, 00:25:01.456 "send_buf_size": 4096, 00:25:01.456 "enable_recv_pipe": true, 00:25:01.456 "enable_quickack": false, 00:25:01.456 "enable_placement_id": 0, 00:25:01.456 "enable_zerocopy_send_server": true, 00:25:01.456 "enable_zerocopy_send_client": false, 00:25:01.456 "zerocopy_threshold": 0, 00:25:01.456 "tls_version": 0, 00:25:01.456 "enable_ktls": false 00:25:01.456 } 00:25:01.456 }, 00:25:01.456 { 00:25:01.456 "method": "sock_impl_set_options", 00:25:01.456 "params": { 00:25:01.456 "impl_name": "posix", 00:25:01.456 "recv_buf_size": 2097152, 00:25:01.456 "send_buf_size": 2097152, 00:25:01.456 
"enable_recv_pipe": true, 00:25:01.456 "enable_quickack": false, 00:25:01.456 "enable_placement_id": 0, 00:25:01.456 "enable_zerocopy_send_server": true, 00:25:01.456 "enable_zerocopy_send_client": false, 00:25:01.456 "zerocopy_threshold": 0, 00:25:01.456 "tls_version": 0, 00:25:01.456 "enable_ktls": false 00:25:01.456 } 00:25:01.456 } 00:25:01.456 ] 00:25:01.456 }, 00:25:01.456 { 00:25:01.456 "subsystem": "vmd", 00:25:01.456 "config": [] 00:25:01.456 }, 00:25:01.456 { 00:25:01.456 "subsystem": "accel", 00:25:01.456 "config": [ 00:25:01.456 { 00:25:01.456 "method": "accel_set_options", 00:25:01.456 "params": { 00:25:01.456 "small_cache_size": 128, 00:25:01.456 "large_cache_size": 16, 00:25:01.456 "task_count": 2048, 00:25:01.456 "sequence_count": 2048, 00:25:01.456 "buf_count": 2048 00:25:01.456 } 00:25:01.456 } 00:25:01.456 ] 00:25:01.456 }, 00:25:01.456 { 00:25:01.456 "subsystem": "bdev", 00:25:01.456 "config": [ 00:25:01.456 { 00:25:01.456 "method": "bdev_set_options", 00:25:01.456 "params": { 00:25:01.456 "bdev_io_pool_size": 65535, 00:25:01.456 "bdev_io_cache_size": 256, 00:25:01.456 "bdev_auto_examine": true, 00:25:01.456 "iobuf_small_cache_size": 128, 00:25:01.456 "iobuf_large_cache_size": 16 00:25:01.456 } 00:25:01.456 }, 00:25:01.456 { 00:25:01.456 "method": "bdev_raid_set_options", 00:25:01.456 "params": { 00:25:01.456 "process_window_size_kb": 1024 00:25:01.456 } 00:25:01.456 }, 00:25:01.456 { 00:25:01.456 "method": "bdev_iscsi_set_options", 00:25:01.456 "params": { 00:25:01.456 "timeout_sec": 30 00:25:01.456 } 00:25:01.456 }, 00:25:01.456 { 00:25:01.456 "method": "bdev_nvme_set_options", 00:25:01.456 "params": { 00:25:01.456 "action_on_timeout": "none", 00:25:01.456 "timeout_us": 0, 00:25:01.456 "timeout_admin_us": 0, 00:25:01.456 "keep_alive_timeout_ms": 10000, 00:25:01.456 "arbitration_burst": 0, 00:25:01.456 "low_priority_weight": 0, 00:25:01.457 "medium_priority_weight": 0, 00:25:01.457 "high_priority_weight": 0, 00:25:01.457 "nvme_adminq_poll_period_us": 10000, 00:25:01.457 "nvme_ioq_poll_period_us": 0, 00:25:01.457 "io_queue_requests": 512, 00:25:01.457 "delay_cmd_submit": true, 00:25:01.457 "transport_retry_count": 4, 00:25:01.457 "bdev_retry_count": 3, 00:25:01.457 "transport_ack_timeout": 0, 00:25:01.457 "ctrlr_loss_timeout_sec": 0, 00:25:01.457 "reconnect_delay_sec": 0, 00:25:01.457 "fast_io_fail_timeout_sec": 0, 00:25:01.457 "disable_auto_failback": false, 00:25:01.457 "generate_uuids": false, 00:25:01.457 "transport_tos": 0, 00:25:01.457 "nvme_error_stat": false, 00:25:01.457 "rdma_srq_size": 0, 00:25:01.457 "io_path_stat": false, 00:25:01.457 "allow_accel_sequence": false, 00:25:01.457 "rdma_max_cq_size": 0, 00:25:01.457 "rdma_cm_event_timeout_ms": 0, 00:25:01.457 "dhchap_digests": [ 00:25:01.457 "sha256", 00:25:01.457 "sha384", 00:25:01.457 "sha512" 00:25:01.457 ], 00:25:01.457 "dhchap_dhgroups": [ 00:25:01.457 "null", 00:25:01.457 "ffdhe2048", 00:25:01.457 "ffdhe3072", 00:25:01.457 "ffdhe4096", 00:25:01.457 "ffdhe6144", 00:25:01.457 "ffdhe8192" 00:25:01.457 ] 00:25:01.457 } 00:25:01.457 }, 00:25:01.457 { 00:25:01.457 "method": "bdev_nvme_attach_controller", 00:25:01.457 "params": { 00:25:01.457 "name": "nvme0", 00:25:01.457 "trtype": "TCP", 00:25:01.457 "adrfam": "IPv4", 00:25:01.457 "traddr": "10.0.0.2", 00:25:01.457 "trsvcid": "4420", 00:25:01.457 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:01.457 "prchk_reftag": false, 00:25:01.457 "prchk_guard": false, 00:25:01.457 "ctrlr_loss_timeout_sec": 0, 00:25:01.457 "reconnect_delay_sec": 0, 00:25:01.457 
"fast_io_fail_timeout_sec": 0, 00:25:01.457 "psk": "key0", 00:25:01.457 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:01.457 "hdgst": false, 00:25:01.457 "ddgst": false 00:25:01.457 } 00:25:01.457 }, 00:25:01.457 { 00:25:01.457 "method": "bdev_nvme_set_hotplug", 00:25:01.457 "params": { 00:25:01.457 "period_us": 100000, 00:25:01.457 "enable": false 00:25:01.457 } 00:25:01.457 }, 00:25:01.457 { 00:25:01.457 "method": "bdev_enable_histogram", 00:25:01.457 "params": { 00:25:01.457 "name": "nvme0n1", 00:25:01.457 "enable": true 00:25:01.457 } 00:25:01.457 }, 00:25:01.457 { 00:25:01.457 "method": "bdev_wait_for_examine" 00:25:01.457 } 00:25:01.457 ] 00:25:01.457 }, 00:25:01.457 { 00:25:01.457 "subsystem": "nbd", 00:25:01.457 "config": [] 00:25:01.457 } 00:25:01.457 ] 00:25:01.457 }' 00:25:01.457 22:08:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:01.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:01.457 22:08:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:01.457 22:08:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:01.457 [2024-07-13 22:08:20.842313] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:01.457 [2024-07-13 22:08:20.842450] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4123554 ] 00:25:01.715 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.715 [2024-07-13 22:08:20.969253] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.973 [2024-07-13 22:08:21.222072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.539 [2024-07-13 22:08:21.631304] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:02.539 22:08:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:02.539 22:08:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:25:02.539 22:08:21 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:02.539 22:08:21 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:25:02.797 22:08:22 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.797 22:08:22 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:02.797 Running I/O for 1 seconds... 
00:25:04.169 00:25:04.169 Latency(us) 00:25:04.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.169 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:04.169 Verification LBA range: start 0x0 length 0x2000 00:25:04.169 nvme0n1 : 1.06 2097.61 8.19 0.00 0.00 59548.25 7912.87 86604.61 00:25:04.169 =================================================================================================================== 00:25:04.169 Total : 2097.61 8.19 0.00 0.00 59548.25 7912.87 86604.61 00:25:04.169 0 00:25:04.169 22:08:23 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:25:04.169 22:08:23 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:25:04.169 22:08:23 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:04.169 22:08:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:25:04.169 22:08:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:25:04.169 22:08:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:25:04.169 22:08:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:04.169 22:08:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:25:04.169 22:08:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:25:04.169 22:08:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:25:04.169 22:08:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:04.169 nvmf_trace.0 00:25:04.169 22:08:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:25:04.169 22:08:23 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 4123554 00:25:04.169 22:08:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4123554 ']' 00:25:04.169 22:08:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4123554 00:25:04.169 22:08:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:04.169 22:08:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:04.169 22:08:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4123554 00:25:04.169 22:08:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:04.169 22:08:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:04.170 22:08:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4123554' 00:25:04.170 killing process with pid 4123554 00:25:04.170 22:08:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4123554 00:25:04.170 Received shutdown signal, test time was about 1.000000 seconds 00:25:04.170 00:25:04.170 Latency(us) 00:25:04.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.170 =================================================================================================================== 00:25:04.170 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:04.170 22:08:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4123554 00:25:05.103 22:08:24 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:05.103 22:08:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:05.103 22:08:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 
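[editor's note] The killprocess calls traced above follow a consistent guarded pattern: confirm the pid is non-empty and still alive with kill -0, resolve the process name with ps, then signal and wait so the reactor exits cleanly. A rough bash reconstruction of that pattern, sketched from the xtrace output rather than from the actual autotest_common.sh source:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1            # the '[' -z $pid ']' guard in the trace
        kill -0 "$pid" || return 1           # is the process still alive?
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 / reactor_1
            [ "$name" = sudo ] && return 0   # assumption: skip sudo wrappers
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true      # reap it so sockets and ports free up
    }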
00:25:05.103 22:08:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:05.103 22:08:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:25:05.103 22:08:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:05.103 22:08:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:05.103 rmmod nvme_tcp 00:25:05.103 rmmod nvme_fabrics 00:25:05.103 rmmod nvme_keyring 00:25:05.103 22:08:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:05.103 22:08:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:25:05.103 22:08:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:25:05.103 22:08:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 4123450 ']' 00:25:05.103 22:08:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 4123450 00:25:05.103 22:08:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 4123450 ']' 00:25:05.103 22:08:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 4123450 00:25:05.103 22:08:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:25:05.103 22:08:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:05.103 22:08:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4123450 00:25:05.103 22:08:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:05.103 22:08:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:05.103 22:08:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4123450' 00:25:05.103 killing process with pid 4123450 00:25:05.103 22:08:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 4123450 00:25:05.103 22:08:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 4123450 00:25:06.476 22:08:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:06.733 22:08:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:06.733 22:08:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:06.733 22:08:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:06.733 22:08:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:06.733 22:08:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.733 22:08:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:06.733 22:08:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.633 22:08:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:08.633 22:08:27 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ztHFgK0Ktg /tmp/tmp.VsM9DKYJU3 /tmp/tmp.mdRZcftFe2 00:25:08.633 00:25:08.633 real 1m50.037s 00:25:08.633 user 2m54.062s 00:25:08.633 sys 0m28.886s 00:25:08.633 22:08:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:08.633 22:08:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:08.633 ************************************ 00:25:08.633 END TEST nvmf_tls 00:25:08.633 ************************************ 00:25:08.633 22:08:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:08.633 22:08:27 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:08.633 22:08:27 nvmf_tcp -- 
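[editor's note] The nvmftestfini TCP branch running here is essentially: flush I/O, unload the initiator kernel modules (the rmmod lines above are modprobe -v -r output), drop the SPDK network namespace, and flush the test addresses. A condensed sketch of that sequence (root required; the netns name is inferred from the 'ip netns exec cvl_0_0_ns_spdk' invocation earlier in the log and is an assumption here):

    sync                                        # settle outstanding I/O first
    modprobe -v -r nvme-tcp                     # prints the rmmod lines seen above
    modprobe -v -r nvme-fabrics || true
    ip netns del cvl_0_0_ns_spdk 2>/dev/null    # assumption: how _remove_spdk_ns tears down
    ip -4 addr flush cvl_0_1                    # drop the 10.0.0.x test address from the peer port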
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:08.633 22:08:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:08.633 22:08:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:08.633 ************************************ 00:25:08.633 START TEST nvmf_fips 00:25:08.633 ************************************ 00:25:08.633 22:08:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:08.633 * Looking for test storage... 00:25:08.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:25:08.633 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:08.891 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:08.891 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:08.891 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:25:08.892 
22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:25:08.892 Error setting digest 00:25:08.892 00924393D77F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:25:08.892 00924393D77F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:25:08.892 22:08:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:10.795 
22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:10.795 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:10.795 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:10.795 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:10.796 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:10.796 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:10.796 22:08:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:10.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:10.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:25:10.796 00:25:10.796 --- 10.0.0.2 ping statistics --- 00:25:10.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.796 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:10.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:10.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:25:10.796 00:25:10.796 --- 10.0.0.1 ping statistics --- 00:25:10.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.796 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=4126099 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 4126099 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 4126099 ']' 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:10.796 22:08:30 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:11.055 [2024-07-13 22:08:30.278928] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:11.055 [2024-07-13 22:08:30.279059] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.055 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.055 [2024-07-13 22:08:30.421508] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.313 [2024-07-13 22:08:30.678187] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.313 [2024-07-13 22:08:30.678298] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
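[editor's note] The target bring-up that follows is the nvmf_tgt binary run inside that namespace plus a wait for its RPC socket. A reduced sketch -- the until-loop is a simplified stand-in for waitforlisten, and rpc_get_methods is an ordinary SPDK RPC:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2    # keep polling until the app answers on its UNIX domain socket
done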
00:25:11.313 [2024-07-13 22:08:30.678330] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:11.313 [2024-07-13 22:08:30.678350] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:11.313 [2024-07-13 22:08:30.678371] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:11.313 [2024-07-13 22:08:30.678426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.879 22:08:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:11.879 22:08:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:25:11.879 22:08:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:11.879 22:08:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:11.879 22:08:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:11.879 22:08:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:11.879 22:08:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:25:11.879 22:08:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:11.879 22:08:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:11.879 22:08:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:11.879 22:08:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:11.879 22:08:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:11.879 22:08:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:11.879 22:08:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:12.137 [2024-07-13 22:08:31.415923] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.137 [2024-07-13 22:08:31.431884] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:12.137 [2024-07-13 22:08:31.432148] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:12.137 [2024-07-13 22:08:31.502294] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:12.137 malloc0 00:25:12.137 22:08:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:12.137 22:08:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=4126253 00:25:12.137 22:08:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:12.137 22:08:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 4126253 /var/tmp/bdevperf.sock 00:25:12.137 22:08:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 4126253 ']' 00:25:12.137 22:08:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:12.137 22:08:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:25:12.137 22:08:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:12.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:12.137 22:08:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:12.137 22:08:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:12.396 [2024-07-13 22:08:31.640731] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:12.396 [2024-07-13 22:08:31.640889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4126253 ] 00:25:12.396 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.396 [2024-07-13 22:08:31.767562] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.654 [2024-07-13 22:08:32.000362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:13.219 22:08:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:13.219 22:08:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:25:13.219 22:08:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:13.476 [2024-07-13 22:08:32.784947] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:13.476 [2024-07-13 22:08:32.785139] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:13.733 TLSTESTn1 00:25:13.733 22:08:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:13.733 Running I/O for 10 seconds... 
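[editor's note] The TLS leg above reduces to: write the interchange-format PSK to a 0600 file, register it on the target (setup_nvmf_tgt_conf drives the target-side RPCs, which are not traced in this excerpt), then attach from bdevperf with --psk and run the verify workload. The initiator half, with rpc.py and bdevperf.py standing for the full workspace paths used above:

key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
echo -n "$key" > key.txt
chmod 0600 key.txt                       # the PSK file must not be world-readable
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # drives the 10 s verify run timed above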
00:25:23.732 00:25:23.732 Latency(us) 00:25:23.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.732 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:23.732 Verification LBA range: start 0x0 length 0x2000 00:25:23.732 TLSTESTn1 : 10.05 2166.78 8.46 0.00 0.00 58898.04 10048.85 91264.95 00:25:23.732 =================================================================================================================== 00:25:23.732 Total : 2166.78 8.46 0.00 0.00 58898.04 10048.85 91264.95 00:25:23.732 0 00:25:23.732 22:08:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:23.732 22:08:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:23.732 22:08:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:25:23.732 22:08:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:25:23.732 22:08:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:25:23.732 22:08:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:23.732 22:08:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:25:23.732 22:08:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:25:23.732 22:08:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:25:23.732 22:08:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:23.732 nvmf_trace.0 00:25:23.991 22:08:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:25:23.991 22:08:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 4126253 00:25:23.991 22:08:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 4126253 ']' 00:25:23.991 22:08:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 4126253 00:25:23.991 22:08:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:25:23.991 22:08:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:23.991 22:08:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4126253 00:25:23.991 22:08:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:23.991 22:08:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:23.991 22:08:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4126253' 00:25:23.991 killing process with pid 4126253 00:25:23.991 22:08:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 4126253 00:25:23.991 Received shutdown signal, test time was about 10.000000 seconds 00:25:23.991 00:25:23.991 Latency(us) 00:25:23.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.991 =================================================================================================================== 00:25:23.991 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:23.991 [2024-07-13 22:08:43.212716] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:23.991 22:08:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 4126253 00:25:24.926 22:08:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:24.926 22:08:44 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:25:24.926 22:08:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:25:24.926 22:08:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:24.926 22:08:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:25:24.926 22:08:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:24.926 22:08:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:24.926 rmmod nvme_tcp 00:25:24.926 rmmod nvme_fabrics 00:25:24.926 rmmod nvme_keyring 00:25:24.926 22:08:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:24.926 22:08:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:25:24.926 22:08:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:25:24.926 22:08:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 4126099 ']' 00:25:24.926 22:08:44 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 4126099 00:25:24.926 22:08:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 4126099 ']' 00:25:24.926 22:08:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 4126099 00:25:24.926 22:08:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:25:24.926 22:08:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:24.926 22:08:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4126099 00:25:24.926 22:08:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:24.926 22:08:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:24.926 22:08:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4126099' 00:25:24.926 killing process with pid 4126099 00:25:24.926 22:08:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 4126099 00:25:24.926 [2024-07-13 22:08:44.304056] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:24.926 22:08:44 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 4126099 00:25:26.852 22:08:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:26.852 22:08:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:26.852 22:08:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:26.852 22:08:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:26.852 22:08:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:26.852 22:08:45 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.852 22:08:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:26.852 22:08:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:25:28.752 00:25:28.752 real 0m19.863s 00:25:28.752 user 0m26.361s 00:25:28.752 sys 0m5.763s 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:28.752 ************************************ 00:25:28.752 END TEST nvmf_fips 
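[editor's note] Teardown is symmetrical with the setup: kill the target, unload the kernel NVMe modules, undo the namespace, and delete the key. Roughly -- the body of _remove_spdk_ns is not traced here, so deleting the namespace is an assumption based on its name:

kill "$nvmfpid" && wait "$nvmfpid"
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
ip netns delete cvl_0_0_ns_spdk   # assumed effect of _remove_spdk_ns
ip -4 addr flush cvl_0_1
rm -f key.txt                     # do not leave the PSK behind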
00:25:28.752 ************************************ 00:25:28.752 22:08:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:28.752 22:08:47 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:25:28.752 22:08:47 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:28.752 22:08:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:28.752 22:08:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:28.752 22:08:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:28.752 ************************************ 00:25:28.752 START TEST nvmf_fuzz 00:25:28.752 ************************************ 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:28.752 * Looking for test storage... 00:25:28.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:28.752 22:08:47 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:25:28.752 22:08:47 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:30.653 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:30.653 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:30.653 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.653 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:30.654 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:30.654 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:30.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:30.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:25:30.913 00:25:30.913 --- 10.0.0.2 ping statistics --- 00:25:30.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.913 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:30.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:30.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:25:30.913 00:25:30.913 --- 10.0.0.1 ping statistics --- 00:25:30.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.913 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=4129786 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 4129786 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 4129786 ']' 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:30.913 22:08:50 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:32.288 22:08:51 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:32.288 22:08:51 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:25:32.289 22:08:51 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:32.289 22:08:51 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.289 22:08:51 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:32.289 22:08:51 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.289 22:08:51 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:32.289 22:08:51 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.289 22:08:51 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:32.289 Malloc0 00:25:32.289 22:08:51 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.289 22:08:51 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:32.289 22:08:51 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.289 22:08:51 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:32.289 22:08:51 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.289 22:08:51 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:32.289 22:08:51 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.289 22:08:51 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:32.289 22:08:51 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.289 22:08:51 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:32.289 22:08:51 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.289 22:08:51 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:32.289 22:08:51 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.289 22:08:51 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:32.289 22:08:51 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:26:04.355 Fuzzing completed. 
Shutting down the fuzz application 00:26:04.355 00:26:04.355 Dumping successful admin opcodes: 00:26:04.355 8, 9, 10, 24, 00:26:04.355 Dumping successful io opcodes: 00:26:04.355 0, 9, 00:26:04.355 NS: 0x200003aefec0 I/O qp, Total commands completed: 287041, total successful commands: 1701, random_seed: 862959744 00:26:04.355 NS: 0x200003aefec0 admin qp, Total commands completed: 36160, total successful commands: 305, random_seed: 3533557952 00:26:04.355 22:09:22 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:26:05.731 Fuzzing completed. Shutting down the fuzz application 00:26:05.731 00:26:05.731 Dumping successful admin opcodes: 00:26:05.731 24, 00:26:05.731 Dumping successful io opcodes: 00:26:05.731 00:26:05.731 NS: 0x200003aefec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 948026758 00:26:05.731 NS: 0x200003aefec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 948293970 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:05.731 rmmod nvme_tcp 00:26:05.731 rmmod nvme_fabrics 00:26:05.731 rmmod nvme_keyring 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 4129786 ']' 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 4129786 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 4129786 ']' 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 4129786 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4129786 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:05.731 
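[editor's note] The whole fuzz fixture above is five RPCs plus one fuzzer invocation (rpc_cmd in the trace wraps rpc.py; here rpc.py and nvme_fuzz stand for the full workspace paths):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create -b Malloc0 64 512    # 64 MB malloc bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a   # fixed seed (-S) so a crash reproduces

The second pass swaps the random 30 s run for a replay of the canned commands in example.json (-j), which is why it completes almost immediately with far fewer commands.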
22:09:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4129786' 00:26:05.731 killing process with pid 4129786 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 4129786 00:26:05.731 22:09:24 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 4129786 00:26:07.108 22:09:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:07.108 22:09:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:07.108 22:09:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:07.108 22:09:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:07.108 22:09:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:07.108 22:09:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.108 22:09:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:07.108 22:09:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.052 22:09:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:09.052 22:09:28 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:09.052 00:26:09.052 real 0m40.501s 00:26:09.052 user 0m56.057s 00:26:09.052 sys 0m13.629s 00:26:09.052 22:09:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:09.052 22:09:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:09.052 ************************************ 00:26:09.052 END TEST nvmf_fuzz 00:26:09.052 ************************************ 00:26:09.052 22:09:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:09.052 22:09:28 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:09.052 22:09:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:09.052 22:09:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:09.052 22:09:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:09.052 ************************************ 00:26:09.052 START TEST nvmf_multiconnection 00:26:09.052 ************************************ 00:26:09.052 22:09:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:09.311 * Looking for test storage... 
00:26:09.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:26:09.311 22:09:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:11.210 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:11.211 22:09:30 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:11.211 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:11.211 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:11.211 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:11.211 22:09:30 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:11.211 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:11.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:11.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:26:11.211 00:26:11.211 --- 10.0.0.2 ping statistics --- 00:26:11.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.211 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:11.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:11.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:26:11.211 00:26:11.211 --- 10.0.0.1 ping statistics --- 00:26:11.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.211 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=4136464 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 4136464 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 4136464 ']' 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:11.211 22:09:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
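The nvmf_tcp_init phase traced above condenses to the following sketch. The commands are the same ones shown in nvmf/common.sh; the interface names cvl_0_0 and cvl_0_1 are specific to this machine's two e810 ports and would differ elsewhere.

    # Condensed sketch of the target/initiator split performed above.
    ip netns add cvl_0_0_ns_spdk                    # namespace that will host the nvmf target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                              # root ns -> target ns sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Splitting the two ports across namespaces keeps 10.0.0.1 -> 10.0.0.2 traffic from short-circuiting through local routing, so the test exercises the actual link between the NIC ports.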
00:26:11.212 22:09:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:11.212 22:09:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:11.212 [2024-07-13 22:09:30.600474] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:11.212 [2024-07-13 22:09:30.600635] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:11.469 EAL: No free 2048 kB hugepages reported on node 1 00:26:11.469 [2024-07-13 22:09:30.739226] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:11.727 [2024-07-13 22:09:30.999362] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:11.727 [2024-07-13 22:09:30.999444] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:11.727 [2024-07-13 22:09:30.999472] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:11.727 [2024-07-13 22:09:30.999493] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:11.727 [2024-07-13 22:09:30.999514] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:11.727 [2024-07-13 22:09:30.999639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.727 [2024-07-13 22:09:30.999706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:11.727 [2024-07-13 22:09:30.999726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.727 [2024-07-13 22:09:30.999741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:12.292 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:12.292 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:26:12.292 22:09:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:12.292 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:12.292 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.292 22:09:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:12.292 22:09:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:12.292 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.292 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.293 [2024-07-13 22:09:31.570382] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:12.293 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.293 22:09:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:26:12.293 22:09:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:12.293 22:09:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:12.293 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.293 
22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.293 Malloc1 00:26:12.293 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.293 22:09:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:12.293 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.293 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.293 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.293 22:09:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:12.293 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.293 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.293 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.293 22:09:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:12.293 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.293 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.293 [2024-07-13 22:09:31.679811] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:12.293 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.293 22:09:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:12.293 22:09:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:12.293 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.293 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.551 Malloc2 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.551 22:09:31 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.551 Malloc3 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.551 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.811 Malloc4 00:26:12.811 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.811 22:09:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:12.811 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.811 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.811 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.811 22:09:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:12.811 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.811 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:26:12.811 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.811 22:09:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:26:12.811 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.811 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.811 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.811 22:09:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:12.811 22:09:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:12.811 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.811 22:09:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.811 Malloc5 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.811 Malloc6 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.811 22:09:32 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:12.811 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.812 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.069 Malloc7 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.069 Malloc8 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.069 Malloc9 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.069 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.352 Malloc10 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.352 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:13.353 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:26:13.353 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.353 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.353 Malloc11 00:26:13.353 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.353 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:26:13.353 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.353 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.353 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.353 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:26:13.353 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.353 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.353 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.353 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
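The eleven-way loop above reduces to four RPCs per subsystem. A minimal sketch of one iteration, assuming SPDK's scripts/rpc.py as the client (rpc_cmd in autotest_common.sh drives the same /var/tmp/spdk.sock RPC socket the target was seen listening on):

    # One iteration of the creation loop above, for subsystem $i (1..11).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    i=1
    $rpc bdev_malloc_create 64 512 -b Malloc$i            # 64 MiB RAM disk, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

The -a flag leaves each subsystem open to any host NQN, which is what lets the single generated hostnqn in the connect loop below attach to all eleven controllers; -s sets the serial (SPDK1..SPDK11) that the initiator later greps for.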
00:26:13.353 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.353 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.353 22:09:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.353 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:26:13.353 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:13.353 22:09:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:14.286 22:09:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:14.286 22:09:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:14.286 22:09:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:14.286 22:09:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:14.286 22:09:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:16.195 22:09:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:16.195 22:09:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:16.195 22:09:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:26:16.195 22:09:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:16.195 22:09:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:16.195 22:09:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:16.195 22:09:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:16.195 22:09:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:16.760 22:09:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:16.760 22:09:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:16.760 22:09:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:16.760 22:09:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:16.760 22:09:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:19.286 22:09:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:19.286 22:09:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:19.286 22:09:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:26:19.286 22:09:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:19.286 22:09:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:19.286 
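Each of the remaining connects repeats the same initiator-side pattern: nvme connect with this host's NQN and ID, then poll lsblk until a block device with the expected serial shows up. A hedged sketch of one iteration (the until-loop stands in for waitforserial, which bounds itself at about 15 two-second retries):

    # Initiator-side pattern repeated below for cnode3..cnode11.
    i=3
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode$i \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    until lsblk -l -o NAME,SERIAL | grep -q "SPDK$i"; do sleep 2; done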
22:09:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:19.286 22:09:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:19.286 22:09:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:19.544 22:09:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:19.544 22:09:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:19.544 22:09:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:19.544 22:09:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:19.544 22:09:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:21.444 22:09:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:21.444 22:09:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:21.444 22:09:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:26:21.444 22:09:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:21.444 22:09:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:21.444 22:09:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:21.444 22:09:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.444 22:09:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:22.379 22:09:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:22.379 22:09:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:22.379 22:09:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:22.379 22:09:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:22.379 22:09:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:24.277 22:09:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:24.277 22:09:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:24.277 22:09:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:26:24.277 22:09:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:24.277 22:09:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:24.277 22:09:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:24.277 22:09:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:24.277 22:09:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:25.209 22:09:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:25.209 22:09:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:25.209 22:09:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:25.209 22:09:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:25.209 22:09:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:27.172 22:09:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:27.172 22:09:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:27.172 22:09:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:26:27.172 22:09:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:27.172 22:09:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:27.172 22:09:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:27.172 22:09:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:27.172 22:09:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:28.108 22:09:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:28.109 22:09:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:28.109 22:09:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:28.109 22:09:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:28.109 22:09:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:30.008 22:09:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:30.008 22:09:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:30.008 22:09:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:26:30.008 22:09:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:30.008 22:09:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:30.008 22:09:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:30.008 22:09:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.008 22:09:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:30.575 22:09:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:30.575 22:09:49 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:30.575 22:09:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:30.575 22:09:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:30.575 22:09:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:33.105 22:09:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:33.105 22:09:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:33.105 22:09:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:26:33.105 22:09:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:33.105 22:09:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:33.105 22:09:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:33.105 22:09:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:33.105 22:09:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:33.670 22:09:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:33.670 22:09:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:33.670 22:09:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:33.670 22:09:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:33.670 22:09:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:35.567 22:09:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:35.567 22:09:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:35.567 22:09:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:26:35.567 22:09:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:35.567 22:09:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:35.567 22:09:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:35.567 22:09:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:35.567 22:09:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:36.499 22:09:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:36.499 22:09:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:36.499 22:09:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:36.499 22:09:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 
00:26:36.499 22:09:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:38.393 22:09:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:38.393 22:09:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:38.393 22:09:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:26:38.393 22:09:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:38.393 22:09:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:38.393 22:09:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:38.393 22:09:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:38.393 22:09:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:39.765 22:09:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:39.765 22:09:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:39.765 22:09:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:39.765 22:09:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:39.765 22:09:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:41.663 22:10:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:41.663 22:10:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:26:41.663 22:10:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:26:41.663 22:10:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:41.663 22:10:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:41.663 22:10:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:41.663 22:10:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:41.663 22:10:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:42.597 22:10:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:42.597 22:10:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:26:42.597 22:10:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:26:42.597 22:10:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:26:42.597 22:10:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:26:44.495 22:10:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:26:44.495 22:10:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:26:44.495 22:10:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:26:44.495 22:10:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:26:44.495 22:10:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:26:44.495 22:10:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:26:44.495 22:10:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:44.495 [global] 00:26:44.495 thread=1 00:26:44.495 invalidate=1 00:26:44.495 rw=read 00:26:44.495 time_based=1 00:26:44.495 runtime=10 00:26:44.495 ioengine=libaio 00:26:44.495 direct=1 00:26:44.495 bs=262144 00:26:44.495 iodepth=64 00:26:44.495 norandommap=1 00:26:44.495 numjobs=1 00:26:44.495 00:26:44.495 [job0] 00:26:44.495 filename=/dev/nvme0n1 00:26:44.495 [job1] 00:26:44.495 filename=/dev/nvme10n1 00:26:44.495 [job2] 00:26:44.495 filename=/dev/nvme1n1 00:26:44.495 [job3] 00:26:44.495 filename=/dev/nvme2n1 00:26:44.495 [job4] 00:26:44.495 filename=/dev/nvme3n1 00:26:44.495 [job5] 00:26:44.495 filename=/dev/nvme4n1 00:26:44.495 [job6] 00:26:44.495 filename=/dev/nvme5n1 00:26:44.495 [job7] 00:26:44.495 filename=/dev/nvme6n1 00:26:44.495 [job8] 00:26:44.495 filename=/dev/nvme7n1 00:26:44.495 [job9] 00:26:44.495 filename=/dev/nvme8n1 00:26:44.495 [job10] 00:26:44.495 filename=/dev/nvme9n1 00:26:44.495 Could not set queue depth (nvme0n1) 00:26:44.495 Could not set queue depth (nvme10n1) 00:26:44.495 Could not set queue depth (nvme1n1) 00:26:44.495 Could not set queue depth (nvme2n1) 00:26:44.495 Could not set queue depth (nvme3n1) 00:26:44.495 Could not set queue depth (nvme4n1) 00:26:44.495 Could not set queue depth (nvme5n1) 00:26:44.495 Could not set queue depth (nvme6n1) 00:26:44.496 Could not set queue depth (nvme7n1) 00:26:44.496 Could not set queue depth (nvme8n1) 00:26:44.496 Could not set queue depth (nvme9n1) 00:26:44.754 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:44.754 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:44.754 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:44.754 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:44.754 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:44.754 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:44.754 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:44.754 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:44.754 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:44.754 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:44.754 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:44.754 fio-3.35 00:26:44.754 Starting 11 threads 00:26:57.008 00:26:57.008 job0: 
(groupid=0, jobs=1): err= 0: pid=4140906: Sat Jul 13 22:10:14 2024 00:26:57.008 read: IOPS=382, BW=95.6MiB/s (100MB/s)(970MiB/10144msec) 00:26:57.008 slat (usec): min=14, max=300313, avg=2290.52, stdev=8849.73 00:26:57.008 clat (msec): min=58, max=497, avg=164.84, stdev=71.34 00:26:57.008 lat (msec): min=58, max=687, avg=167.13, stdev=72.24 00:26:57.008 clat percentiles (msec): 00:26:57.008 | 1.00th=[ 74], 5.00th=[ 90], 10.00th=[ 96], 20.00th=[ 106], 00:26:57.008 | 30.00th=[ 116], 40.00th=[ 130], 50.00th=[ 155], 60.00th=[ 174], 00:26:57.008 | 70.00th=[ 190], 80.00th=[ 207], 90.00th=[ 236], 95.00th=[ 313], 00:26:57.008 | 99.00th=[ 405], 99.50th=[ 418], 99.90th=[ 460], 99.95th=[ 460], 00:26:57.008 | 99.99th=[ 498] 00:26:57.008 bw ( KiB/s): min=32256, max=158720, per=6.38%, avg=97692.85, stdev=36842.10, samples=20 00:26:57.008 iops : min= 126, max= 620, avg=381.60, stdev=143.94, samples=20 00:26:57.008 lat (msec) : 100=13.66%, 250=78.01%, 500=8.33% 00:26:57.008 cpu : usr=0.30%, sys=1.49%, ctx=912, majf=0, minf=4097 00:26:57.008 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:57.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:57.008 issued rwts: total=3879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.008 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:57.008 job1: (groupid=0, jobs=1): err= 0: pid=4140907: Sat Jul 13 22:10:14 2024 00:26:57.008 read: IOPS=420, BW=105MiB/s (110MB/s)(1067MiB/10152msec) 00:26:57.008 slat (usec): min=9, max=218809, avg=1638.91, stdev=7559.77 00:26:57.008 clat (msec): min=2, max=590, avg=150.41, stdev=81.20 00:26:57.008 lat (msec): min=2, max=611, avg=152.05, stdev=82.53 00:26:57.008 clat percentiles (msec): 00:26:57.008 | 1.00th=[ 11], 5.00th=[ 28], 10.00th=[ 42], 20.00th=[ 80], 00:26:57.008 | 30.00th=[ 110], 40.00th=[ 127], 50.00th=[ 150], 60.00th=[ 171], 00:26:57.008 | 70.00th=[ 194], 80.00th=[ 207], 90.00th=[ 228], 95.00th=[ 305], 00:26:57.008 | 99.00th=[ 405], 99.50th=[ 422], 99.90th=[ 439], 99.95th=[ 451], 00:26:57.008 | 99.99th=[ 592] 00:26:57.008 bw ( KiB/s): min=43008, max=188416, per=7.03%, avg=107673.60, stdev=43398.92, samples=20 00:26:57.008 iops : min= 168, max= 736, avg=420.60, stdev=169.53, samples=20 00:26:57.008 lat (msec) : 4=0.02%, 10=0.89%, 20=2.53%, 50=10.31%, 100=11.62% 00:26:57.008 lat (msec) : 250=68.07%, 500=6.51%, 750=0.05% 00:26:57.008 cpu : usr=0.28%, sys=1.32%, ctx=1164, majf=0, minf=4097 00:26:57.008 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:57.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:57.008 issued rwts: total=4269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.008 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:57.008 job2: (groupid=0, jobs=1): err= 0: pid=4140908: Sat Jul 13 22:10:14 2024 00:26:57.008 read: IOPS=637, BW=159MiB/s (167MB/s)(1618MiB/10143msec) 00:26:57.008 slat (usec): min=11, max=246279, avg=1445.43, stdev=6840.14 00:26:57.008 clat (msec): min=7, max=611, avg=98.78, stdev=73.88 00:26:57.008 lat (msec): min=7, max=611, avg=100.23, stdev=75.10 00:26:57.008 clat percentiles (msec): 00:26:57.008 | 1.00th=[ 36], 5.00th=[ 40], 10.00th=[ 42], 20.00th=[ 44], 00:26:57.008 | 30.00th=[ 48], 40.00th=[ 65], 50.00th=[ 75], 60.00th=[ 87], 00:26:57.008 | 70.00th=[ 106], 80.00th=[ 129], 
90.00th=[ 207], 95.00th=[ 236], 00:26:57.008 | 99.00th=[ 397], 99.50th=[ 422], 99.90th=[ 435], 99.95th=[ 550], 00:26:57.008 | 99.99th=[ 609] 00:26:57.008 bw ( KiB/s): min=30781, max=382464, per=10.71%, avg=164047.85, stdev=100396.45, samples=20 00:26:57.008 iops : min= 120, max= 1494, avg=640.80, stdev=392.19, samples=20 00:26:57.008 lat (msec) : 10=0.05%, 20=0.29%, 50=31.17%, 100=35.68%, 250=28.77% 00:26:57.008 lat (msec) : 500=3.96%, 750=0.08% 00:26:57.008 cpu : usr=0.43%, sys=2.15%, ctx=1322, majf=0, minf=4097 00:26:57.008 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:26:57.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:57.008 issued rwts: total=6471,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.008 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:57.008 job3: (groupid=0, jobs=1): err= 0: pid=4140909: Sat Jul 13 22:10:14 2024 00:26:57.008 read: IOPS=585, BW=146MiB/s (154MB/s)(1477MiB/10085msec) 00:26:57.008 slat (usec): min=9, max=145650, avg=931.83, stdev=4137.39 00:26:57.008 clat (usec): min=1700, max=461771, avg=108221.77, stdev=67566.15 00:26:57.008 lat (usec): min=1728, max=461800, avg=109153.59, stdev=67785.54 00:26:57.008 clat percentiles (msec): 00:26:57.008 | 1.00th=[ 14], 5.00th=[ 30], 10.00th=[ 55], 20.00th=[ 70], 00:26:57.008 | 30.00th=[ 80], 40.00th=[ 86], 50.00th=[ 95], 60.00th=[ 102], 00:26:57.008 | 70.00th=[ 112], 80.00th=[ 129], 90.00th=[ 186], 95.00th=[ 247], 00:26:57.008 | 99.00th=[ 409], 99.50th=[ 439], 99.90th=[ 451], 99.95th=[ 456], 00:26:57.008 | 99.99th=[ 464] 00:26:57.008 bw ( KiB/s): min=39424, max=212992, per=9.77%, avg=149632.00, stdev=45542.07, samples=20 00:26:57.008 iops : min= 154, max= 832, avg=584.50, stdev=177.90, samples=20 00:26:57.008 lat (msec) : 2=0.02%, 4=0.12%, 10=0.46%, 20=2.39%, 50=5.92% 00:26:57.008 lat (msec) : 100=49.34%, 250=37.17%, 500=4.59% 00:26:57.008 cpu : usr=0.38%, sys=2.05%, ctx=1475, majf=0, minf=4097 00:26:57.008 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:26:57.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:57.008 issued rwts: total=5908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.008 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:57.008 job4: (groupid=0, jobs=1): err= 0: pid=4140910: Sat Jul 13 22:10:14 2024 00:26:57.008 read: IOPS=444, BW=111MiB/s (117MB/s)(1129MiB/10153msec) 00:26:57.008 slat (usec): min=12, max=60471, avg=2122.12, stdev=5653.28 00:26:57.008 clat (msec): min=26, max=328, avg=141.66, stdev=52.13 00:26:57.008 lat (msec): min=27, max=328, avg=143.78, stdev=52.89 00:26:57.008 clat percentiles (msec): 00:26:57.008 | 1.00th=[ 47], 5.00th=[ 63], 10.00th=[ 75], 20.00th=[ 90], 00:26:57.008 | 30.00th=[ 102], 40.00th=[ 120], 50.00th=[ 150], 60.00th=[ 163], 00:26:57.008 | 70.00th=[ 174], 80.00th=[ 194], 90.00th=[ 209], 95.00th=[ 222], 00:26:57.008 | 99.00th=[ 243], 99.50th=[ 268], 99.90th=[ 313], 99.95th=[ 321], 00:26:57.008 | 99.99th=[ 330] 00:26:57.008 bw ( KiB/s): min=73216, max=206336, per=7.44%, avg=113965.55, stdev=39956.54, samples=20 00:26:57.008 iops : min= 286, max= 806, avg=445.15, stdev=156.02, samples=20 00:26:57.008 lat (msec) : 50=1.33%, 100=27.38%, 250=70.70%, 500=0.60% 00:26:57.008 cpu : usr=0.22%, sys=1.80%, ctx=999, majf=0, minf=4097 00:26:57.008 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:57.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:57.009 issued rwts: total=4515,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.009 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:57.009 job5: (groupid=0, jobs=1): err= 0: pid=4140911: Sat Jul 13 22:10:14 2024 00:26:57.009 read: IOPS=461, BW=115MiB/s (121MB/s)(1171MiB/10156msec) 00:26:57.009 slat (usec): min=9, max=133702, avg=1360.19, stdev=6039.68 00:26:57.009 clat (msec): min=3, max=339, avg=137.24, stdev=69.00 00:26:57.009 lat (msec): min=3, max=358, avg=138.60, stdev=69.80 00:26:57.009 clat percentiles (msec): 00:26:57.009 | 1.00th=[ 10], 5.00th=[ 21], 10.00th=[ 38], 20.00th=[ 85], 00:26:57.009 | 30.00th=[ 102], 40.00th=[ 114], 50.00th=[ 129], 60.00th=[ 157], 00:26:57.009 | 70.00th=[ 182], 80.00th=[ 197], 90.00th=[ 220], 95.00th=[ 247], 00:26:57.009 | 99.00th=[ 317], 99.50th=[ 321], 99.90th=[ 334], 99.95th=[ 338], 00:26:57.009 | 99.99th=[ 342] 00:26:57.009 bw ( KiB/s): min=59904, max=221696, per=7.72%, avg=118323.20, stdev=38064.17, samples=20 00:26:57.009 iops : min= 234, max= 866, avg=462.20, stdev=148.69, samples=20 00:26:57.009 lat (msec) : 4=0.02%, 10=1.02%, 20=3.95%, 50=7.15%, 100=16.93% 00:26:57.009 lat (msec) : 250=66.04%, 500=4.89% 00:26:57.009 cpu : usr=0.30%, sys=1.32%, ctx=1282, majf=0, minf=4097 00:26:57.009 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:57.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:57.009 issued rwts: total=4685,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.009 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:57.009 job6: (groupid=0, jobs=1): err= 0: pid=4140912: Sat Jul 13 22:10:14 2024 00:26:57.009 read: IOPS=546, BW=137MiB/s (143MB/s)(1378MiB/10088msec) 00:26:57.009 slat (usec): min=9, max=167189, avg=1560.22, stdev=6537.37 00:26:57.009 clat (usec): min=1136, max=558023, avg=115465.64, stdev=77694.25 00:26:57.009 lat (usec): min=1155, max=558102, avg=117025.85, stdev=78828.89 00:26:57.009 clat percentiles (usec): 00:26:57.009 | 1.00th=[ 1762], 5.00th=[ 11469], 10.00th=[ 20055], 20.00th=[ 64226], 00:26:57.009 | 30.00th=[ 82314], 40.00th=[ 94897], 50.00th=[102237], 60.00th=[114820], 00:26:57.009 | 70.00th=[130548], 80.00th=[154141], 90.00th=[219153], 95.00th=[254804], 00:26:57.009 | 99.00th=[379585], 99.50th=[396362], 99.90th=[421528], 99.95th=[541066], 00:26:57.009 | 99.99th=[557843] 00:26:57.009 bw ( KiB/s): min=40960, max=302592, per=9.11%, avg=139494.40, stdev=67586.55, samples=20 00:26:57.009 iops : min= 160, max= 1182, avg=544.90, stdev=264.01, samples=20 00:26:57.009 lat (msec) : 2=1.11%, 4=0.89%, 10=2.58%, 20=5.28%, 50=8.18% 00:26:57.009 lat (msec) : 100=28.36%, 250=48.37%, 500=5.19%, 750=0.05% 00:26:57.009 cpu : usr=0.33%, sys=1.91%, ctx=1242, majf=0, minf=4097 00:26:57.009 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:57.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:57.009 issued rwts: total=5512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.009 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:57.009 job7: (groupid=0, jobs=1): err= 0: pid=4140913: Sat Jul 13 
22:10:14 2024 00:26:57.009 read: IOPS=506, BW=127MiB/s (133MB/s)(1289MiB/10188msec) 00:26:57.009 slat (usec): min=14, max=56414, avg=1816.83, stdev=5168.25 00:26:57.009 clat (msec): min=18, max=371, avg=124.48, stdev=62.04 00:26:57.009 lat (msec): min=18, max=371, avg=126.30, stdev=62.99 00:26:57.009 clat percentiles (msec): 00:26:57.009 | 1.00th=[ 43], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 57], 00:26:57.009 | 30.00th=[ 81], 40.00th=[ 102], 50.00th=[ 113], 60.00th=[ 138], 00:26:57.009 | 70.00th=[ 167], 80.00th=[ 188], 90.00th=[ 209], 95.00th=[ 222], 00:26:57.009 | 99.00th=[ 264], 99.50th=[ 284], 99.90th=[ 368], 99.95th=[ 368], 00:26:57.009 | 99.99th=[ 372] 00:26:57.009 bw ( KiB/s): min=72704, max=346112, per=8.51%, avg=130406.40, stdev=70465.96, samples=20 00:26:57.009 iops : min= 284, max= 1352, avg=509.40, stdev=275.26, samples=20 00:26:57.009 lat (msec) : 20=0.04%, 50=16.85%, 100=22.42%, 250=59.12%, 500=1.57% 00:26:57.009 cpu : usr=0.30%, sys=1.96%, ctx=1187, majf=0, minf=4097 00:26:57.009 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:57.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:57.009 issued rwts: total=5157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.009 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:57.009 job8: (groupid=0, jobs=1): err= 0: pid=4140914: Sat Jul 13 22:10:14 2024 00:26:57.009 read: IOPS=804, BW=201MiB/s (211MB/s)(2029MiB/10090msec) 00:26:57.009 slat (usec): min=11, max=113013, avg=975.37, stdev=3875.23 00:26:57.009 clat (msec): min=2, max=371, avg=78.53, stdev=50.55 00:26:57.009 lat (msec): min=2, max=371, avg=79.50, stdev=51.11 00:26:57.009 clat percentiles (msec): 00:26:57.009 | 1.00th=[ 10], 5.00th=[ 26], 10.00th=[ 35], 20.00th=[ 40], 00:26:57.009 | 30.00th=[ 46], 40.00th=[ 51], 50.00th=[ 61], 60.00th=[ 78], 00:26:57.009 | 70.00th=[ 95], 80.00th=[ 117], 90.00th=[ 148], 95.00th=[ 184], 00:26:57.009 | 99.00th=[ 251], 99.50th=[ 262], 99.90th=[ 305], 99.95th=[ 321], 00:26:57.009 | 99.99th=[ 372] 00:26:57.009 bw ( KiB/s): min=82944, max=390144, per=13.45%, avg=206105.60, stdev=90486.72, samples=20 00:26:57.009 iops : min= 324, max= 1524, avg=805.10, stdev=353.46, samples=20 00:26:57.009 lat (msec) : 4=0.15%, 10=1.04%, 20=2.76%, 50=34.89%, 100=34.18% 00:26:57.009 lat (msec) : 250=25.92%, 500=1.07% 00:26:57.009 cpu : usr=0.53%, sys=3.07%, ctx=1759, majf=0, minf=3721 00:26:57.009 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:57.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:57.009 issued rwts: total=8114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.009 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:57.009 job9: (groupid=0, jobs=1): err= 0: pid=4140915: Sat Jul 13 22:10:14 2024 00:26:57.009 read: IOPS=569, BW=142MiB/s (149MB/s)(1427MiB/10024msec) 00:26:57.009 slat (usec): min=8, max=177029, avg=863.79, stdev=5458.36 00:26:57.009 clat (usec): min=1156, max=469693, avg=111476.33, stdev=84054.49 00:26:57.009 lat (usec): min=1190, max=469726, avg=112340.12, stdev=84702.65 00:26:57.009 clat percentiles (msec): 00:26:57.009 | 1.00th=[ 5], 5.00th=[ 15], 10.00th=[ 28], 20.00th=[ 42], 00:26:57.009 | 30.00th=[ 55], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 105], 00:26:57.009 | 70.00th=[ 157], 80.00th=[ 174], 90.00th=[ 222], 95.00th=[ 275], 
00:26:57.009 | 99.00th=[ 397], 99.50th=[ 409], 99.90th=[ 439], 99.95th=[ 468], 00:26:57.009 | 99.99th=[ 468] 00:26:57.009 bw ( KiB/s): min=43008, max=344064, per=9.43%, avg=144460.80, stdev=69476.14, samples=20 00:26:57.009 iops : min= 168, max= 1344, avg=564.30, stdev=271.39, samples=20 00:26:57.009 lat (msec) : 2=0.19%, 4=0.40%, 10=2.54%, 20=3.42%, 50=21.78% 00:26:57.009 lat (msec) : 100=30.30%, 250=35.63%, 500=5.73% 00:26:57.009 cpu : usr=0.29%, sys=1.79%, ctx=1615, majf=0, minf=4097 00:26:57.009 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:57.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:57.009 issued rwts: total=5706,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.009 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:57.009 job10: (groupid=0, jobs=1): err= 0: pid=4140916: Sat Jul 13 22:10:14 2024 00:26:57.009 read: IOPS=674, BW=169MiB/s (177MB/s)(1689MiB/10021msec) 00:26:57.009 slat (usec): min=12, max=72895, avg=1365.48, stdev=4177.47 00:26:57.009 clat (msec): min=8, max=371, avg=93.48, stdev=46.38 00:26:57.009 lat (msec): min=8, max=371, avg=94.85, stdev=47.11 00:26:57.009 clat percentiles (msec): 00:26:57.009 | 1.00th=[ 17], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 59], 00:26:57.009 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 94], 00:26:57.009 | 70.00th=[ 108], 80.00th=[ 130], 90.00th=[ 163], 95.00th=[ 174], 00:26:57.009 | 99.00th=[ 257], 99.50th=[ 292], 99.90th=[ 313], 99.95th=[ 317], 00:26:57.009 | 99.99th=[ 372] 00:26:57.009 bw ( KiB/s): min=82944, max=293376, per=11.18%, avg=171315.20, stdev=58096.57, samples=20 00:26:57.009 iops : min= 324, max= 1146, avg=669.20, stdev=226.94, samples=20 00:26:57.009 lat (msec) : 10=0.04%, 20=1.67%, 50=9.83%, 100=53.09%, 250=34.33% 00:26:57.009 lat (msec) : 500=1.04% 00:26:57.009 cpu : usr=0.49%, sys=2.42%, ctx=1403, majf=0, minf=4097 00:26:57.009 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:57.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:57.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:57.009 issued rwts: total=6755,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:57.009 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:57.009 00:26:57.009 Run status group 0 (all jobs): 00:26:57.009 READ: bw=1496MiB/s (1569MB/s), 95.6MiB/s-201MiB/s (100MB/s-211MB/s), io=14.9GiB (16.0GB), run=10021-10188msec 00:26:57.009 00:26:57.009 Disk stats (read/write): 00:26:57.009 nvme0n1: ios=7599/0, merge=0/0, ticks=1231428/0, in_queue=1231428, util=97.18% 00:26:57.009 nvme10n1: ios=8354/0, merge=0/0, ticks=1229035/0, in_queue=1229035, util=97.41% 00:26:57.009 nvme1n1: ios=12780/0, merge=0/0, ticks=1229512/0, in_queue=1229512, util=97.67% 00:26:57.009 nvme2n1: ios=11626/0, merge=0/0, ticks=1246247/0, in_queue=1246247, util=97.81% 00:26:57.009 nvme3n1: ios=8840/0, merge=0/0, ticks=1224430/0, in_queue=1224430, util=97.88% 00:26:57.009 nvme4n1: ios=9195/0, merge=0/0, ticks=1235559/0, in_queue=1235559, util=98.23% 00:26:57.009 nvme5n1: ios=10820/0, merge=0/0, ticks=1236252/0, in_queue=1236252, util=98.39% 00:26:57.009 nvme6n1: ios=10312/0, merge=0/0, ticks=1258414/0, in_queue=1258414, util=98.52% 00:26:57.009 nvme7n1: ios=16019/0, merge=0/0, ticks=1237087/0, in_queue=1237087, util=98.91% 00:26:57.009 nvme8n1: ios=11172/0, merge=0/0, ticks=1245032/0, in_queue=1245032, util=99.09% 
00:26:57.009 nvme9n1: ios=13256/0, merge=0/0, ticks=1236477/0, in_queue=1236477, util=99.22% 00:26:57.009 22:10:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:57.009 [global] 00:26:57.009 thread=1 00:26:57.009 invalidate=1 00:26:57.009 rw=randwrite 00:26:57.009 time_based=1 00:26:57.009 runtime=10 00:26:57.009 ioengine=libaio 00:26:57.009 direct=1 00:26:57.009 bs=262144 00:26:57.009 iodepth=64 00:26:57.009 norandommap=1 00:26:57.009 numjobs=1 00:26:57.009 00:26:57.009 [job0] 00:26:57.009 filename=/dev/nvme0n1 00:26:57.009 [job1] 00:26:57.009 filename=/dev/nvme10n1 00:26:57.009 [job2] 00:26:57.009 filename=/dev/nvme1n1 00:26:57.009 [job3] 00:26:57.009 filename=/dev/nvme2n1 00:26:57.009 [job4] 00:26:57.009 filename=/dev/nvme3n1 00:26:57.009 [job5] 00:26:57.009 filename=/dev/nvme4n1 00:26:57.009 [job6] 00:26:57.009 filename=/dev/nvme5n1 00:26:57.009 [job7] 00:26:57.009 filename=/dev/nvme6n1 00:26:57.009 [job8] 00:26:57.010 filename=/dev/nvme7n1 00:26:57.010 [job9] 00:26:57.010 filename=/dev/nvme8n1 00:26:57.010 [job10] 00:26:57.010 filename=/dev/nvme9n1 00:26:57.010 Could not set queue depth (nvme0n1) 00:26:57.010 Could not set queue depth (nvme10n1) 00:26:57.010 Could not set queue depth (nvme1n1) 00:26:57.010 Could not set queue depth (nvme2n1) 00:26:57.010 Could not set queue depth (nvme3n1) 00:26:57.010 Could not set queue depth (nvme4n1) 00:26:57.010 Could not set queue depth (nvme5n1) 00:26:57.010 Could not set queue depth (nvme6n1) 00:26:57.010 Could not set queue depth (nvme7n1) 00:26:57.010 Could not set queue depth (nvme8n1) 00:26:57.010 Could not set queue depth (nvme9n1) 00:26:57.010 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:57.010 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:57.010 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:57.010 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:57.010 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:57.010 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:57.010 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:57.010 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:57.010 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:57.010 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:57.010 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:57.010 fio-3.35 00:26:57.010 Starting 11 threads 00:27:06.988 00:27:06.988 job0: (groupid=0, jobs=1): err= 0: pid=4141934: Sat Jul 13 22:10:25 2024 00:27:06.988 write: IOPS=364, BW=91.2MiB/s (95.7MB/s)(942MiB/10327msec); 0 zone resets 00:27:06.988 slat (usec): min=24, max=244005, avg=2282.08, stdev=7914.41 00:27:06.988 clat (msec): min=3, max=1143, avg=172.87, stdev=155.11 00:27:06.988 
lat (msec): min=3, max=1163, avg=175.16, stdev=156.75 00:27:06.989 clat percentiles (msec): 00:27:06.989 | 1.00th=[ 19], 5.00th=[ 46], 10.00th=[ 58], 20.00th=[ 61], 00:27:06.989 | 30.00th=[ 64], 40.00th=[ 112], 50.00th=[ 118], 60.00th=[ 138], 00:27:06.989 | 70.00th=[ 190], 80.00th=[ 271], 90.00th=[ 334], 95.00th=[ 493], 00:27:06.989 | 99.00th=[ 818], 99.50th=[ 1062], 99.90th=[ 1150], 99.95th=[ 1150], 00:27:06.989 | 99.99th=[ 1150] 00:27:06.989 bw ( KiB/s): min=15840, max=268288, per=10.59%, avg=94855.75, stdev=66435.05, samples=20 00:27:06.989 iops : min= 61, max= 1048, avg=370.45, stdev=259.57, samples=20 00:27:06.989 lat (msec) : 4=0.03%, 10=0.27%, 20=0.90%, 50=5.78%, 100=27.54% 00:27:06.989 lat (msec) : 250=42.40%, 500=18.31%, 750=3.21%, 1000=1.03%, 2000=0.53% 00:27:06.989 cpu : usr=1.22%, sys=1.27%, ctx=1537, majf=0, minf=1 00:27:06.989 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:27:06.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:06.989 issued rwts: total=0,3769,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.989 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:06.989 job1: (groupid=0, jobs=1): err= 0: pid=4141935: Sat Jul 13 22:10:25 2024 00:27:06.989 write: IOPS=157, BW=39.4MiB/s (41.3MB/s)(407MiB/10322msec); 0 zone resets 00:27:06.989 slat (usec): min=26, max=151774, avg=6139.05, stdev=15199.32 00:27:06.989 clat (msec): min=27, max=1125, avg=399.17, stdev=198.49 00:27:06.989 lat (msec): min=28, max=1125, avg=405.31, stdev=200.48 00:27:06.989 clat percentiles (msec): 00:27:06.989 | 1.00th=[ 57], 5.00th=[ 127], 10.00th=[ 186], 20.00th=[ 220], 00:27:06.989 | 30.00th=[ 288], 40.00th=[ 338], 50.00th=[ 363], 60.00th=[ 414], 00:27:06.989 | 70.00th=[ 485], 80.00th=[ 542], 90.00th=[ 659], 95.00th=[ 776], 00:27:06.989 | 99.00th=[ 1036], 99.50th=[ 1083], 99.90th=[ 1133], 99.95th=[ 1133], 00:27:06.989 | 99.99th=[ 1133] 00:27:06.989 bw ( KiB/s): min= 8686, max=73728, per=4.47%, avg=40052.60, stdev=17219.40, samples=20 00:27:06.989 iops : min= 33, max= 288, avg=156.35, stdev=67.32, samples=20 00:27:06.989 lat (msec) : 50=0.74%, 100=2.15%, 250=22.11%, 500=46.56%, 750=22.05% 00:27:06.989 lat (msec) : 1000=5.04%, 2000=1.35% 00:27:06.989 cpu : usr=0.51%, sys=0.46%, ctx=425, majf=0, minf=1 00:27:06.989 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:27:06.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.989 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:06.989 issued rwts: total=0,1628,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.989 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:06.989 job2: (groupid=0, jobs=1): err= 0: pid=4141936: Sat Jul 13 22:10:25 2024 00:27:06.989 write: IOPS=203, BW=50.9MiB/s (53.4MB/s)(526MiB/10322msec); 0 zone resets 00:27:06.989 slat (usec): min=26, max=145439, avg=4093.92, stdev=10867.38 00:27:06.989 clat (msec): min=20, max=1180, avg=309.82, stdev=181.11 00:27:06.989 lat (msec): min=20, max=1180, avg=313.92, stdev=183.51 00:27:06.989 clat percentiles (msec): 00:27:06.989 | 1.00th=[ 48], 5.00th=[ 106], 10.00th=[ 136], 20.00th=[ 197], 00:27:06.989 | 30.00th=[ 224], 40.00th=[ 230], 50.00th=[ 243], 60.00th=[ 262], 00:27:06.989 | 70.00th=[ 338], 80.00th=[ 451], 90.00th=[ 575], 95.00th=[ 617], 00:27:06.989 | 99.00th=[ 1011], 99.50th=[ 1053], 99.90th=[ 1116], 99.95th=[ 1183], 00:27:06.989 | 99.99th=[ 
1183] 00:27:06.989 bw ( KiB/s): min= 8175, max=93508, per=5.82%, avg=52158.45, stdev=24250.79, samples=20 00:27:06.989 iops : min= 31, max= 365, avg=203.65, stdev=94.81, samples=20 00:27:06.989 lat (msec) : 50=1.19%, 100=3.33%, 250=50.24%, 500=28.64%, 750=14.03% 00:27:06.989 lat (msec) : 1000=1.52%, 2000=1.05% 00:27:06.989 cpu : usr=0.63%, sys=0.75%, ctx=925, majf=0, minf=1 00:27:06.989 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:27:06.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:06.989 issued rwts: total=0,2102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.989 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:06.989 job3: (groupid=0, jobs=1): err= 0: pid=4141948: Sat Jul 13 22:10:25 2024 00:27:06.989 write: IOPS=409, BW=102MiB/s (107MB/s)(1037MiB/10133msec); 0 zone resets 00:27:06.989 slat (usec): min=25, max=55289, avg=2229.86, stdev=4871.37 00:27:06.989 clat (msec): min=3, max=336, avg=154.08, stdev=70.23 00:27:06.989 lat (msec): min=3, max=336, avg=156.31, stdev=71.18 00:27:06.989 clat percentiles (msec): 00:27:06.989 | 1.00th=[ 19], 5.00th=[ 61], 10.00th=[ 95], 20.00th=[ 102], 00:27:06.989 | 30.00th=[ 106], 40.00th=[ 109], 50.00th=[ 118], 60.00th=[ 159], 00:27:06.989 | 70.00th=[ 194], 80.00th=[ 224], 90.00th=[ 262], 95.00th=[ 284], 00:27:06.989 | 99.00th=[ 313], 99.50th=[ 321], 99.90th=[ 334], 99.95th=[ 338], 00:27:06.989 | 99.99th=[ 338] 00:27:06.989 bw ( KiB/s): min=51200, max=165376, per=11.67%, avg=104516.35, stdev=40342.37, samples=20 00:27:06.989 iops : min= 200, max= 646, avg=408.25, stdev=157.60, samples=20 00:27:06.989 lat (msec) : 4=0.02%, 10=0.27%, 20=0.92%, 50=2.97%, 100=10.76% 00:27:06.989 lat (msec) : 250=71.97%, 500=13.10% 00:27:06.989 cpu : usr=1.19%, sys=1.40%, ctx=1423, majf=0, minf=1 00:27:06.989 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:27:06.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:06.989 issued rwts: total=0,4146,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.989 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:06.989 job4: (groupid=0, jobs=1): err= 0: pid=4141949: Sat Jul 13 22:10:25 2024 00:27:06.989 write: IOPS=485, BW=121MiB/s (127MB/s)(1222MiB/10065msec); 0 zone resets 00:27:06.989 slat (usec): min=18, max=157524, avg=1692.59, stdev=6121.00 00:27:06.989 clat (msec): min=2, max=826, avg=130.04, stdev=137.62 00:27:06.989 lat (msec): min=2, max=839, avg=131.73, stdev=139.34 00:27:06.989 clat percentiles (msec): 00:27:06.989 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 32], 20.00th=[ 62], 00:27:06.989 | 30.00th=[ 69], 40.00th=[ 73], 50.00th=[ 85], 60.00th=[ 106], 00:27:06.989 | 70.00th=[ 136], 80.00th=[ 148], 90.00th=[ 284], 95.00th=[ 363], 00:27:06.989 | 99.00th=[ 785], 99.50th=[ 802], 99.90th=[ 810], 99.95th=[ 818], 00:27:06.989 | 99.99th=[ 827] 00:27:06.989 bw ( KiB/s): min=22528, max=261120, per=13.78%, avg=123458.95, stdev=75540.49, samples=20 00:27:06.989 iops : min= 88, max= 1020, avg=482.20, stdev=295.08, samples=20 00:27:06.989 lat (msec) : 4=0.20%, 10=2.46%, 20=3.77%, 50=10.40%, 100=40.28% 00:27:06.989 lat (msec) : 250=31.38%, 500=7.43%, 750=2.54%, 1000=1.56% 00:27:06.989 cpu : usr=1.37%, sys=1.60%, ctx=2350, majf=0, minf=1 00:27:06.989 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:06.989 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:06.989 issued rwts: total=0,4886,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.989 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:06.989 job5: (groupid=0, jobs=1): err= 0: pid=4141950: Sat Jul 13 22:10:25 2024 00:27:06.989 write: IOPS=268, BW=67.2MiB/s (70.5MB/s)(680MiB/10108msec); 0 zone resets 00:27:06.989 slat (usec): min=23, max=89938, avg=2743.55, stdev=7344.44 00:27:06.989 clat (msec): min=3, max=763, avg=234.98, stdev=148.17 00:27:06.989 lat (msec): min=4, max=773, avg=237.73, stdev=149.73 00:27:06.989 clat percentiles (msec): 00:27:06.989 | 1.00th=[ 11], 5.00th=[ 22], 10.00th=[ 48], 20.00th=[ 105], 00:27:06.989 | 30.00th=[ 167], 40.00th=[ 192], 50.00th=[ 224], 60.00th=[ 243], 00:27:06.989 | 70.00th=[ 271], 80.00th=[ 317], 90.00th=[ 468], 95.00th=[ 510], 00:27:06.989 | 99.00th=[ 709], 99.50th=[ 735], 99.90th=[ 751], 99.95th=[ 760], 00:27:06.989 | 99.99th=[ 768] 00:27:06.989 bw ( KiB/s): min=29696, max=174592, per=7.59%, avg=67979.30, stdev=37777.71, samples=20 00:27:06.989 iops : min= 116, max= 682, avg=265.50, stdev=147.58, samples=20 00:27:06.989 lat (msec) : 4=0.04%, 10=0.88%, 20=3.68%, 50=5.77%, 100=8.53% 00:27:06.989 lat (msec) : 250=45.27%, 500=29.31%, 750=6.36%, 1000=0.15% 00:27:06.989 cpu : usr=0.79%, sys=0.72%, ctx=1352, majf=0, minf=1 00:27:06.989 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:27:06.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:06.989 issued rwts: total=0,2719,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.989 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:06.989 job6: (groupid=0, jobs=1): err= 0: pid=4141951: Sat Jul 13 22:10:25 2024 00:27:06.989 write: IOPS=443, BW=111MiB/s (116MB/s)(1123MiB/10134msec); 0 zone resets 00:27:06.989 slat (usec): min=22, max=56009, avg=1794.07, stdev=4302.29 00:27:06.989 clat (msec): min=2, max=374, avg=142.50, stdev=76.94 00:27:06.989 lat (msec): min=3, max=374, avg=144.29, stdev=77.76 00:27:06.989 clat percentiles (msec): 00:27:06.989 | 1.00th=[ 14], 5.00th=[ 34], 10.00th=[ 52], 20.00th=[ 96], 00:27:06.989 | 30.00th=[ 109], 40.00th=[ 113], 50.00th=[ 117], 60.00th=[ 129], 00:27:06.989 | 70.00th=[ 174], 80.00th=[ 213], 90.00th=[ 262], 95.00th=[ 300], 00:27:06.989 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[ 363], 99.95th=[ 372], 00:27:06.989 | 99.99th=[ 376] 00:27:06.989 bw ( KiB/s): min=53248, max=213504, per=12.65%, avg=113356.45, stdev=41733.48, samples=20 00:27:06.989 iops : min= 208, max= 834, avg=442.75, stdev=163.06, samples=20 00:27:06.989 lat (msec) : 4=0.04%, 10=0.53%, 20=1.67%, 50=7.50%, 100=10.60% 00:27:06.989 lat (msec) : 250=69.03%, 500=10.62% 00:27:06.989 cpu : usr=1.35%, sys=1.41%, ctx=2008, majf=0, minf=1 00:27:06.989 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:06.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.989 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:06.989 issued rwts: total=0,4491,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.989 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:06.989 job7: (groupid=0, jobs=1): err= 0: pid=4141952: Sat Jul 13 22:10:25 2024 00:27:06.989 write: IOPS=513, BW=128MiB/s (135MB/s)(1293MiB/10062msec); 0 zone resets 
00:27:06.989 slat (usec): min=17, max=153942, avg=1513.49, stdev=4342.36 00:27:06.989 clat (usec): min=1746, max=645575, avg=122971.85, stdev=91315.60 00:27:06.989 lat (usec): min=1789, max=645614, avg=124485.34, stdev=92291.42 00:27:06.989 clat percentiles (msec): 00:27:06.989 | 1.00th=[ 8], 5.00th=[ 29], 10.00th=[ 53], 20.00th=[ 59], 00:27:06.989 | 30.00th=[ 63], 40.00th=[ 78], 50.00th=[ 102], 60.00th=[ 108], 00:27:06.989 | 70.00th=[ 144], 80.00th=[ 192], 90.00th=[ 226], 95.00th=[ 253], 00:27:06.989 | 99.00th=[ 542], 99.50th=[ 558], 99.90th=[ 617], 99.95th=[ 642], 00:27:06.989 | 99.99th=[ 642] 00:27:06.989 bw ( KiB/s): min=28672, max=283081, per=14.59%, avg=130748.60, stdev=67284.20, samples=20 00:27:06.989 iops : min= 112, max= 1105, avg=510.65, stdev=262.74, samples=20 00:27:06.989 lat (msec) : 2=0.02%, 4=0.21%, 10=1.30%, 20=1.80%, 50=4.87% 00:27:06.989 lat (msec) : 100=40.03%, 250=46.59%, 500=3.38%, 750=1.80% 00:27:06.989 cpu : usr=1.37%, sys=1.49%, ctx=2288, majf=0, minf=1 00:27:06.989 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:06.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.990 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:06.990 issued rwts: total=0,5171,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.990 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:06.990 job8: (groupid=0, jobs=1): err= 0: pid=4141959: Sat Jul 13 22:10:25 2024 00:27:06.990 write: IOPS=218, BW=54.5MiB/s (57.2MB/s)(563MiB/10322msec); 0 zone resets 00:27:06.990 slat (usec): min=17, max=152346, avg=3807.87, stdev=12138.62 00:27:06.990 clat (usec): min=1635, max=1234.6k, avg=289341.66, stdev=233992.41 00:27:06.990 lat (usec): min=1680, max=1234.6k, avg=293149.53, stdev=237029.56 00:27:06.990 clat percentiles (msec): 00:27:06.990 | 1.00th=[ 5], 5.00th=[ 19], 10.00th=[ 33], 20.00th=[ 99], 00:27:06.990 | 30.00th=[ 178], 40.00th=[ 194], 50.00th=[ 215], 60.00th=[ 245], 00:27:06.990 | 70.00th=[ 292], 80.00th=[ 527], 90.00th=[ 617], 95.00th=[ 709], 00:27:06.990 | 99.00th=[ 1116], 99.50th=[ 1167], 99.90th=[ 1217], 99.95th=[ 1234], 00:27:06.990 | 99.99th=[ 1234] 00:27:06.990 bw ( KiB/s): min= 8175, max=130299, per=6.25%, avg=55998.90, stdev=34928.92, samples=20 00:27:06.990 iops : min= 31, max= 508, avg=218.65, stdev=136.40, samples=20 00:27:06.990 lat (msec) : 2=0.13%, 4=0.62%, 10=2.13%, 20=2.40%, 50=9.06% 00:27:06.990 lat (msec) : 100=5.99%, 250=41.34%, 500=16.39%, 750=17.90%, 1000=2.53% 00:27:06.990 lat (msec) : 2000=1.51% 00:27:06.990 cpu : usr=0.60%, sys=0.62%, ctx=1117, majf=0, minf=1 00:27:06.990 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:27:06.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.990 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:06.990 issued rwts: total=0,2252,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.990 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:06.990 job9: (groupid=0, jobs=1): err= 0: pid=4141960: Sat Jul 13 22:10:25 2024 00:27:06.990 write: IOPS=195, BW=48.9MiB/s (51.3MB/s)(505MiB/10325msec); 0 zone resets 00:27:06.990 slat (usec): min=18, max=266653, avg=4162.18, stdev=12778.20 00:27:06.990 clat (usec): min=1653, max=1154.0k, avg=322438.72, stdev=216503.15 00:27:06.990 lat (usec): min=1679, max=1154.1k, avg=326600.90, stdev=219037.68 00:27:06.990 clat percentiles (msec): 00:27:06.990 | 1.00th=[ 7], 5.00th=[ 13], 10.00th=[ 21], 20.00th=[ 178], 00:27:06.990 
| 30.00th=[ 199], 40.00th=[ 230], 50.00th=[ 305], 60.00th=[ 351], 00:27:06.990 | 70.00th=[ 409], 80.00th=[ 481], 90.00th=[ 600], 95.00th=[ 693], 00:27:06.990 | 99.00th=[ 1053], 99.50th=[ 1099], 99.90th=[ 1150], 99.95th=[ 1150], 00:27:06.990 | 99.99th=[ 1150] 00:27:06.990 bw ( KiB/s): min= 6131, max=88240, per=5.59%, avg=50107.35, stdev=23313.54, samples=20 00:27:06.990 iops : min= 23, max= 344, avg=195.65, stdev=91.10, samples=20 00:27:06.990 lat (msec) : 2=0.10%, 4=0.30%, 10=3.27%, 20=6.09%, 50=4.26% 00:27:06.990 lat (msec) : 100=1.09%, 250=27.81%, 500=40.03%, 750=14.15%, 1000=1.43% 00:27:06.990 lat (msec) : 2000=1.48% 00:27:06.990 cpu : usr=0.53%, sys=0.59%, ctx=1014, majf=0, minf=1 00:27:06.990 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:27:06.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.990 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:06.990 issued rwts: total=0,2021,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.990 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:06.990 job10: (groupid=0, jobs=1): err= 0: pid=4141961: Sat Jul 13 22:10:25 2024 00:27:06.990 write: IOPS=286, BW=71.5MiB/s (75.0MB/s)(739MiB/10325msec); 0 zone resets 00:27:06.990 slat (usec): min=14, max=159523, avg=3006.85, stdev=9483.21 00:27:06.990 clat (msec): min=2, max=1161, avg=219.90, stdev=216.26 00:27:06.990 lat (msec): min=2, max=1161, avg=222.91, stdev=219.00 00:27:06.990 clat percentiles (msec): 00:27:06.990 | 1.00th=[ 9], 5.00th=[ 15], 10.00th=[ 22], 20.00th=[ 57], 00:27:06.990 | 30.00th=[ 79], 40.00th=[ 104], 50.00th=[ 125], 60.00th=[ 188], 00:27:06.990 | 70.00th=[ 236], 80.00th=[ 414], 90.00th=[ 542], 95.00th=[ 642], 00:27:06.990 | 99.00th=[ 1011], 99.50th=[ 1099], 99.90th=[ 1167], 99.95th=[ 1167], 00:27:06.990 | 99.99th=[ 1167] 00:27:06.990 bw ( KiB/s): min=22528, max=179712, per=8.26%, avg=74013.70, stdev=55565.15, samples=20 00:27:06.990 iops : min= 88, max= 702, avg=289.10, stdev=217.03, samples=20 00:27:06.990 lat (msec) : 4=0.20%, 10=2.13%, 20=6.26%, 50=5.31%, 100=24.60% 00:27:06.990 lat (msec) : 250=33.71%, 500=13.03%, 750=13.06%, 1000=0.64%, 2000=1.05% 00:27:06.990 cpu : usr=0.67%, sys=0.77%, ctx=1248, majf=0, minf=1 00:27:06.990 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:27:06.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.990 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:06.990 issued rwts: total=0,2955,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.990 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:06.990 00:27:06.990 Run status group 0 (all jobs): 00:27:06.990 WRITE: bw=875MiB/s (917MB/s), 39.4MiB/s-128MiB/s (41.3MB/s-135MB/s), io=9035MiB (9474MB), run=10062-10327msec 00:27:06.990 00:27:06.990 Disk stats (read/write): 00:27:06.990 nvme0n1: ios=48/7458, merge=0/0, ticks=1857/1202491, in_queue=1204348, util=99.52% 00:27:06.990 nvme10n1: ios=45/3181, merge=0/0, ticks=3690/1165403, in_queue=1169093, util=99.85% 00:27:06.990 nvme1n1: ios=47/4129, merge=0/0, ticks=1989/1202692, in_queue=1204681, util=99.95% 00:27:06.990 nvme2n1: ios=48/8119, merge=0/0, ticks=1828/1202550, in_queue=1204378, util=100.00% 00:27:06.990 nvme3n1: ios=49/9537, merge=0/0, ticks=1135/1215459, in_queue=1216594, util=100.00% 00:27:06.990 nvme4n1: ios=44/5209, merge=0/0, ticks=1759/1212995, in_queue=1214754, util=100.00% 00:27:06.990 nvme5n1: ios=38/8802, merge=0/0, ticks=1542/1211820, 
in_queue=1213362, util=100.00% 00:27:06.990 nvme6n1: ios=0/10087, merge=0/0, ticks=0/1220964, in_queue=1220964, util=98.35% 00:27:06.990 nvme7n1: ios=0/4429, merge=0/0, ticks=0/1200539, in_queue=1200539, util=98.81% 00:27:06.990 nvme8n1: ios=39/3964, merge=0/0, ticks=1381/1195202, in_queue=1196583, util=100.00% 00:27:06.990 nvme9n1: ios=47/5833, merge=0/0, ticks=3867/1178213, in_queue=1182080, util=100.00% 00:27:06.990 22:10:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:27:06.990 22:10:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:27:06.990 22:10:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:06.990 22:10:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:06.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:06.990 22:10:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:27:06.990 22:10:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:06.990 22:10:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:06.990 22:10:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:27:06.990 22:10:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:06.990 22:10:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:27:06.990 22:10:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:06.990 22:10:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:06.990 22:10:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.990 22:10:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:06.990 22:10:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.990 22:10:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:06.990 22:10:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:27:06.990 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:27:06.990 22:10:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:27:06.990 22:10:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:06.990 22:10:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:06.990 22:10:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:27:06.990 22:10:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:06.990 22:10:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:27:06.990 22:10:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:06.990 22:10:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:06.990 22:10:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.990 22:10:26 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:27:06.990 22:10:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.990 22:10:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:06.990 22:10:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:27:07.558 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:27:07.558 22:10:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:27:07.558 22:10:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:07.558 22:10:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:07.558 22:10:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:27:07.558 22:10:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:07.558 22:10:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:27:07.558 22:10:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:07.558 22:10:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:07.558 22:10:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.558 22:10:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.558 22:10:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.558 22:10:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:07.558 22:10:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:27:07.819 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:27:07.819 22:10:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:27:07.819 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:07.819 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:07.819 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:27:07.819 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:07.819 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:27:07.819 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:07.819 22:10:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:27:07.819 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.819 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:07.819 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.819 22:10:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:07.819 22:10:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:27:08.079 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 
controller(s) 00:27:08.079 22:10:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:27:08.079 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:08.079 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:08.079 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:27:08.079 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:08.079 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:27:08.079 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:08.079 22:10:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:27:08.079 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.079 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.079 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.079 22:10:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:08.079 22:10:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:27:08.338 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:27:08.338 22:10:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:27:08.338 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:08.338 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:08.338 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:27:08.338 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:08.338 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:27:08.338 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:08.338 22:10:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:27:08.338 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.338 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.338 22:10:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.338 22:10:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:08.338 22:10:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:27:08.907 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:27:08.907 22:10:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:27:08.907 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:08.907 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:08.907 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:27:08.907 
22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:08.907 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:27:08.907 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:08.907 22:10:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:27:08.907 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.907 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:08.907 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.907 22:10:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:08.907 22:10:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:27:09.165 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:27:09.166 22:10:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:27:09.166 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:09.166 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:09.166 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:27:09.166 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:09.166 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:27:09.166 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:09.166 22:10:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:27:09.166 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.166 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:09.166 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.166 22:10:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:09.166 22:10:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:27:09.166 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:27:09.166 22:10:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:27:09.166 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:09.166 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:09.166 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:27:09.166 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:09.166 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:27:09.166 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:09.166 22:10:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:27:09.166 
22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.166 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:09.166 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.166 22:10:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:09.166 22:10:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:27:09.424 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:27:09.424 22:10:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:27:09.424 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:09.424 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:09.424 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:27:09.424 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:09.424 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:27:09.424 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:09.424 22:10:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:27:09.424 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.424 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:09.424 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.424 22:10:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:09.424 22:10:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:27:09.684 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:27:09.684 22:10:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:27:09.684 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:27:09.684 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:27:09.684 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:27:09.684 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:27:09.684 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:27:09.684 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:27:09.684 22:10:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:27:09.684 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.684 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:09.684 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.684 22:10:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:27:09.684 22:10:28 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:09.684 22:10:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:27:09.684 22:10:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:09.685 22:10:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:27:09.685 22:10:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:09.685 22:10:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:27:09.685 22:10:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:09.685 22:10:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:09.685 rmmod nvme_tcp 00:27:09.685 rmmod nvme_fabrics 00:27:09.685 rmmod nvme_keyring 00:27:09.685 22:10:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:09.685 22:10:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:27:09.685 22:10:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:27:09.685 22:10:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 4136464 ']' 00:27:09.685 22:10:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 4136464 00:27:09.685 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 4136464 ']' 00:27:09.685 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 4136464 00:27:09.685 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:27:09.685 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:09.685 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4136464 00:27:09.685 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:09.685 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:09.685 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4136464' 00:27:09.685 killing process with pid 4136464 00:27:09.685 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 4136464 00:27:09.685 22:10:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 4136464 00:27:12.975 22:10:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:12.975 22:10:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:12.975 22:10:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:12.975 22:10:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:12.975 22:10:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:12.975 22:10:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.975 22:10:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:12.976 22:10:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.895 22:10:34 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:14.895 00:27:14.895 real 1m5.720s 00:27:14.895 user 3m39.512s 00:27:14.895 sys 0m21.633s 00:27:14.895 
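nvmftestfini, traced above just before the timing summary, unwinds everything the test set up: it unloads the initiator modules (the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines), kills the SPDK target process, and flushes the test addresses. A condensed sketch of the traced steps; the netns removal inside _remove_spdk_ns is an assumption:

# Condensed reconstruction of nvmftestfini as traced above.
sync
set +e                                   # module removal may need retries
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break     # drags out nvme_fabrics/nvme_keyring too
    sleep 1
done
modprobe -v -r nvme-fabrics
set -e
kill "$nvmfpid"; wait "$nvmfpid"         # killprocess: stop nvmf_tgt (reactor_0)
ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1                 # drop the initiator-side address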
22:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:14.895 22:10:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:14.895 ************************************ 00:27:14.895 END TEST nvmf_multiconnection 00:27:14.895 ************************************ 00:27:14.895 22:10:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:14.895 22:10:34 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:14.895 22:10:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:14.895 22:10:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:14.895 22:10:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:14.895 ************************************ 00:27:14.895 START TEST nvmf_initiator_timeout 00:27:14.895 ************************************ 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:14.895 * Looking for test storage... 00:27:14.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:14.895 22:10:34 
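The START TEST/END TEST banners and the real/user/sys timing blocks around each test come from the run_test wrapper in autotest_common.sh, which also prefixes every xtrace line with the test name (nvmf_tcp.nvmf_initiator_timeout and so on). Roughly, as a sketch inferred from the observed output rather than the actual source:

# Inferred shape of run_test; the real helper also wires up the xtrace prefix.
run_test() {
    local name=$1; shift
    echo "START TEST $name"
    time "$@"          # e.g. test/nvmf/target/initiator_timeout.sh --transport=tcp
    echo "END TEST $name"
}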
nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:27:14.895 22:10:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:16.801 
22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:16.801 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:16.802 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:16.802 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:16.802 22:10:36 
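gather_supported_nvmf_pci_devs, traced here, collects the PCI IDs of supported NICs (Intel E810 0x1592/0x159b, X722 0x37d2, and a list of Mellanox parts), keeps the E810 ports present on this rig (8086:159b, ice driver), and maps each PCI address to its netdev through sysfs. A simplified stand-alone equivalent; lspci is a stand-in for the harness's cached bus scan:

# Simplified version of the discovery traced above.
declare -a net_devs=()
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    echo "Found $pci (0x8086 - 0x159b)"
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] || continue
        net_devs+=("${path##*/}")        # yields cvl_0_0 and cvl_0_1 here
    done
done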
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:16.802 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:16.802 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:16.802 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:17.059 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:17.059 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:17.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:17.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:27:17.059 00:27:17.060 --- 10.0.0.2 ping statistics --- 00:27:17.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.060 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:27:17.060 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:17.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:17.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:27:17.060 00:27:17.060 --- 10.0.0.1 ping statistics --- 00:27:17.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.060 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:27:17.060 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:17.060 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:27:17.060 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:17.060 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:17.060 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:17.060 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:17.060 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:17.060 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:17.060 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:17.060 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:17.060 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:17.060 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:17.060 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:17.060 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=4145546 00:27:17.060 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:17.060 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 4145546 00:27:17.060 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 4145546 ']' 00:27:17.060 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.060 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:17.060 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:17.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:17.060 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:17.060 22:10:36 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:17.060 [2024-07-13 22:10:36.325877] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
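nvmf_tcp_init, traced above, builds the loopback fabric this test runs on: the first E810 port (cvl_0_0) moves into a private network namespace as the target side at 10.0.0.2/24, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, port 4420 is opened, both directions are ping-verified, and nvmf_tgt is then launched inside the namespace. The commands, collected from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
# nvmfappstart then runs the target inside the same namespace:
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Running target and initiator on opposite ends of a two-port card, separated by a namespace, exercises the real NIC datapath without needing a second machine.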
00:27:17.060 [2024-07-13 22:10:36.326034] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:17.060 EAL: No free 2048 kB hugepages reported on node 1 00:27:17.319 [2024-07-13 22:10:36.459532] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:17.319 [2024-07-13 22:10:36.685093] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:17.319 [2024-07-13 22:10:36.685162] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:17.319 [2024-07-13 22:10:36.685201] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:17.319 [2024-07-13 22:10:36.685218] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:17.319 [2024-07-13 22:10:36.685251] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:17.319 [2024-07-13 22:10:36.685365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.319 [2024-07-13 22:10:36.685407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:17.319 [2024-07-13 22:10:36.685448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.319 [2024-07-13 22:10:36.685458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:17.890 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:17.890 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:27:17.890 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:17.890 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:17.890 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.183 Malloc0 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.183 Delay0 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:18.183 22:10:37 
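initiator_timeout.sh stacks a delay bdev on a malloc bdev so that I/O latency can be changed at runtime, which is the whole mechanism of this test. The two RPCs from the trace (rpc_cmd resolves to scripts/rpc.py; latencies are in microseconds):

scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MiB backing bdev, 512 B blocks
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
    -r 30 -t 30 -w 30 -n 30   # avg/p99 read and write latency, all 30 us to start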
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.183 [2024-07-13 22:10:37.378675] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.183 [2024-07-13 22:10:37.407930] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.183 22:10:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:18.757 22:10:38 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:18.757 22:10:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:27:18.757 22:10:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:27:18.757 22:10:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:27:18.757 22:10:38 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:27:21.300 22:10:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:27:21.300 22:10:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:27:21.300 22:10:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:27:21.300 22:10:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:27:21.300 22:10:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:27:21.300 22:10:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:27:21.300 22:10:40 
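The target is then wired up over RPC and the kernel initiator connects; waitforserial polls lsblk for the serial SPDKISFASTANDAWESOME to confirm the namespace arrived as a block device. The sequence from the trace; the polling loop is a sketch of the waitforserial helper:

# Target side:
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side (root namespace):
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
    sleep 2
done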
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=4145978 00:27:21.301 22:10:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:21.301 22:10:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:21.301 [global] 00:27:21.301 thread=1 00:27:21.301 invalidate=1 00:27:21.301 rw=write 00:27:21.301 time_based=1 00:27:21.301 runtime=60 00:27:21.301 ioengine=libaio 00:27:21.301 direct=1 00:27:21.301 bs=4096 00:27:21.301 iodepth=1 00:27:21.301 norandommap=0 00:27:21.301 numjobs=1 00:27:21.301 00:27:21.301 verify_dump=1 00:27:21.301 verify_backlog=512 00:27:21.301 verify_state_save=0 00:27:21.301 do_verify=1 00:27:21.301 verify=crc32c-intel 00:27:21.301 [job0] 00:27:21.301 filename=/dev/nvme0n1 00:27:21.301 Could not set queue depth (nvme0n1) 00:27:21.301 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:21.301 fio-3.35 00:27:21.301 Starting 1 thread 00:27:23.835 22:10:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:23.835 22:10:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.835 22:10:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:23.835 true 00:27:23.835 22:10:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.835 22:10:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:23.835 22:10:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.835 22:10:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:23.835 true 00:27:23.835 22:10:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.835 22:10:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:23.835 22:10:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.835 22:10:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:23.835 true 00:27:23.835 22:10:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.835 22:10:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:23.835 22:10:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.835 22:10:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:23.835 true 00:27:23.835 22:10:43 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.835 22:10:43 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:27.121 22:10:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:27.121 22:10:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.121 22:10:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:27.121 true 00:27:27.121 22:10:46 
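While the 60-second fio write job above runs against /dev/nvme0n1, the test injects roughly 31 seconds of latency into Delay0, holds it briefly, then restores the 30 us baseline; the point is that fio must ride out the stall rather than time out, which the later "nvmf hotplug test: fio successful as expected" verdict confirms. The RPCs as traced (values in microseconds; p99_write is set to 310000000 in the script):

scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  31000000
scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
scripts/rpc.py bdev_delay_update_latency Delay0 p99_read  31000000
scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
sleep 3
for lat in avg_read avg_write p99_read p99_write; do   # back to the 30 us baseline
    scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 30
done

The injected stall is visible in the fio summary below: read clat averages about 17 ms but tops out near 41 s, matching the window where the delay was raised.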
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.121 22:10:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:27.121 22:10:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.121 22:10:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:27.121 true 00:27:27.121 22:10:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.122 22:10:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:27.122 22:10:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.122 22:10:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:27.122 true 00:27:27.122 22:10:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.122 22:10:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:27.122 22:10:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.122 22:10:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:27.122 true 00:27:27.122 22:10:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.122 22:10:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:27.122 22:10:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 4145978 00:28:23.358 00:28:23.358 job0: (groupid=0, jobs=1): err= 0: pid=4146059: Sat Jul 13 22:11:40 2024 00:28:23.358 read: IOPS=57, BW=230KiB/s (236kB/s)(13.5MiB/60020msec) 00:28:23.358 slat (usec): min=5, max=14835, avg=23.78, stdev=290.53 00:28:23.358 clat (usec): min=384, max=40996k, avg=16930.62, stdev=697284.73 00:28:23.358 lat (usec): min=392, max=40996k, avg=16954.40, stdev=697284.53 00:28:23.358 clat percentiles (usec): 00:28:23.358 | 1.00th=[ 404], 5.00th=[ 445], 10.00th=[ 461], 00:28:23.358 | 20.00th=[ 490], 30.00th=[ 510], 40.00th=[ 529], 00:28:23.358 | 50.00th=[ 545], 60.00th=[ 578], 70.00th=[ 619], 00:28:23.358 | 80.00th=[ 652], 90.00th=[ 41157], 95.00th=[ 41157], 00:28:23.358 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:28:23.358 | 99.95th=[ 42730], 99.99th=[17112761] 00:28:23.358 write: IOPS=59, BW=239KiB/s (245kB/s)(14.0MiB/60020msec); 0 zone resets 00:28:23.358 slat (usec): min=7, max=26743, avg=26.26, stdev=446.54 00:28:23.358 clat (usec): min=262, max=621, avg=355.93, stdev=49.65 00:28:23.358 lat (usec): min=271, max=27158, avg=382.19, stdev=450.76 00:28:23.358 clat percentiles (usec): 00:28:23.358 | 1.00th=[ 273], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[ 310], 00:28:23.358 | 30.00th=[ 322], 40.00th=[ 334], 50.00th=[ 351], 60.00th=[ 367], 00:28:23.358 | 70.00th=[ 392], 80.00th=[ 396], 90.00th=[ 420], 95.00th=[ 441], 00:28:23.358 | 99.00th=[ 482], 99.50th=[ 498], 99.90th=[ 523], 99.95th=[ 545], 00:28:23.358 | 99.99th=[ 619] 00:28:23.358 bw ( KiB/s): min= 1544, max= 4488, per=100.00%, avg=3584.00, stdev=1008.05, samples=8 00:28:23.358 iops : min= 386, max= 1122, avg=896.00, stdev=252.01, samples=8 00:28:23.358 lat (usec) : 500=62.92%, 750=31.46%, 1000=0.14% 00:28:23.358 lat (msec) : 50=5.47%, >=2000=0.01% 00:28:23.358 cpu : usr=0.15%, sys=0.28%, ctx=7046, majf=0, minf=2 
00:28:23.358 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:23.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.358 issued rwts: total=3457,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.358 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:23.358 00:28:23.358 Run status group 0 (all jobs): 00:28:23.358 READ: bw=230KiB/s (236kB/s), 230KiB/s-230KiB/s (236kB/s-236kB/s), io=13.5MiB (14.2MB), run=60020-60020msec 00:28:23.358 WRITE: bw=239KiB/s (245kB/s), 239KiB/s-239KiB/s (245kB/s-245kB/s), io=14.0MiB (14.7MB), run=60020-60020msec 00:28:23.358 00:28:23.358 Disk stats (read/write): 00:28:23.358 nvme0n1: ios=3507/3584, merge=0/0, ticks=18651/1215, in_queue=19866, util=99.92% 00:28:23.358 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:23.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:23.358 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:23.358 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:28:23.358 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:28:23.358 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:23.358 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:28:23.358 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:23.359 nvmf hotplug test: fio successful as expected 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 
-- # modprobe -v -r nvme-tcp 00:28:23.359 rmmod nvme_tcp 00:28:23.359 rmmod nvme_fabrics 00:28:23.359 rmmod nvme_keyring 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 4145546 ']' 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 4145546 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 4145546 ']' 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 4145546 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4145546 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4145546' 00:28:23.359 killing process with pid 4145546 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 4145546 00:28:23.359 22:11:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 4145546 00:28:23.359 22:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:23.359 22:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:23.359 22:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:23.359 22:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:23.359 22:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:23.359 22:11:42 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.359 22:11:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:23.359 22:11:42 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.261 22:11:44 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:25.261 00:28:25.261 real 1m9.970s 00:28:25.261 user 4m15.981s 00:28:25.261 sys 0m6.533s 00:28:25.261 22:11:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:25.261 22:11:44 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:25.261 ************************************ 00:28:25.261 END TEST nvmf_initiator_timeout 00:28:25.261 ************************************ 00:28:25.261 22:11:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:25.261 22:11:44 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:28:25.261 22:11:44 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:28:25.261 22:11:44 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:28:25.261 22:11:44 nvmf_tcp -- 
nvmf/common.sh@285 -- # xtrace_disable 00:28:25.261 22:11:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:27.165 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:27.165 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:27.165 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:27.165 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:28:27.165 22:11:46 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:27.165 22:11:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:27.165 22:11:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:27.165 22:11:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:27.165 ************************************ 00:28:27.165 START TEST nvmf_perf_adq 00:28:27.165 ************************************ 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:27.165 * Looking for test storage... 
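nvmf.sh only dispatches the ADQ test on a physical rig with TCP-capable E810 ports: it re-runs the device scan and checks that the interface list is non-empty before calling run_test, as the @71-@76 trace lines above show. Roughly (variable names are assumptions; the trace shows the already-expanded values phy and tcp):

# Gate reconstructed from nvmf.sh@71-76 in the trace above.
if [[ $NET_TYPE == phy && $TEST_TRANSPORT == tcp ]]; then
    gather_supported_nvmf_pci_devs
    TCP_INTERFACE_LIST=("${net_devs[@]}")
    if (( ${#TCP_INTERFACE_LIST[@]} > 0 )); then
        run_test nvmf_perf_adq "$rootdir/test/nvmf/target/perf_adq.sh" --transport=tcp
    fi
fi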
00:28:27.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.165 22:11:46 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:27.166 22:11:46 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.166 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:28:27.166 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:27.166 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:27.166 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:27.166 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:27.166 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:27.166 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:27.166 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:27.166 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:27.166 22:11:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:27.166 22:11:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:27.166 22:11:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:29.098 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:29.098 Found 0000:0a:00.1 (0x8086 - 0x159b) 
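Each discovery pass rebuilds the same vendor:device tables before scanning. A condensed sketch of that classification, assuming pci_bus_cache is an associative array mapping "vendor:device" keys to space-separated PCI addresses (the sample entry below is hypothetical, patterned on this run):

# Sketch of the e810/x722/mlx table construction above.
declare -A pci_bus_cache=( ["0x8086:0x159b"]="0000:0a:00.0 0000:0a:00.1" )
intel=0x8086 mellanox=0x15b3
e810=() x722=() mlx=()
e810+=(${pci_bus_cache["$intel:0x1592"]})    # another E810 variant, absent here
e810+=(${pci_bus_cache["$intel:0x159b"]})    # the device ID matched above
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # one of several mlx IDs checked
pci_devs=("${e810[@]}")                      # SPDK_TEST_NVMF_NICS=e810, so keep only those
echo "e810 candidates: ${pci_devs[*]:-none}"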
00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:29.098 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:29.098 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:28:29.098 22:11:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:28:29.357 22:11:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:28:31.894 22:11:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:28:37.166 22:11:55 
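adq_reload_driver (rmmod ice; modprobe ice; sleep 5) runs before each pass so the NIC starts from a clean driver state, with no traffic classes or flower filters left over from a previous run; the sleep gives the ports time to come back up and be renamed before they are reconfigured. As a sketch:

# Sketch of the adq_reload_driver step shown above. The fixed 5 s settle
# time is what this run uses; a more robust version would poll operstate.
adq_reload_driver() {
    rmmod ice        # drop any previous channel/filter configuration
    modprobe ice     # reload; ports reappear and udev renames them
    sleep 5          # let the links come back up before reconfiguring
}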
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:37.166 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:37.166 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:37.166 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:37.167 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:37.167 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:37.167 22:11:55 
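nvmf_tcp_init above turns the two physical ports into a self-contained target/initiator pair: cvl_0_0 moves into namespace cvl_0_0_ns_spdk and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and an iptables rule admits NVMe/TCP traffic on port 4420. A condensed sketch of the same plumbing, using the names and addresses from this log:

# Sketch of the nvmf_tcp_init plumbing above: isolating the target port in
# its own namespace forces target/initiator traffic over the real links.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT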
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:37.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:37.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:28:37.167 00:28:37.167 --- 10.0.0.2 ping statistics --- 00:28:37.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.167 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:37.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:37.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:28:37.167 00:28:37.167 --- 10.0.0.1 ping statistics --- 00:28:37.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.167 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=4157678 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 4157678 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 4157678 ']' 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:37.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:37.167 22:11:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:37.167 [2024-07-13 22:11:55.981980] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
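With both directions pinging cleanly, the target binary is launched inside the namespace with --wait-for-rpc, which holds off framework initialization until the socket options below have been applied over RPC; waitforlisten then blocks until the RPC socket answers. A simplified stand-in for that launch-and-wait sequence (paths relative to the SPDK tree; the polling loop is an assumption, not the harness's exact waitforlisten):

# Sketch: start nvmf_tgt inside the namespace, then wait for its RPC socket.
NS="ip netns exec cvl_0_0_ns_spdk"
$NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
# poll the default RPC socket /var/tmp/spdk.sock until it responds
until ./scripts/rpc.py -t 1 rpc_get_methods &>/dev/null; do sleep 0.5; done
echo "target up as pid $nvmfpid"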
00:28:37.167 [2024-07-13 22:11:55.982137] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:37.167 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.167 [2024-07-13 22:11:56.119089] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:37.167 [2024-07-13 22:11:56.352485] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:37.167 [2024-07-13 22:11:56.352549] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:37.167 [2024-07-13 22:11:56.352588] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:37.167 [2024-07-13 22:11:56.352606] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:37.167 [2024-07-13 22:11:56.352624] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:37.167 [2024-07-13 22:11:56.352757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.167 [2024-07-13 22:11:56.352825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:37.167 [2024-07-13 22:11:56.352872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.167 [2024-07-13 22:11:56.352885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:37.734 22:11:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:37.734 22:11:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:28:37.734 22:11:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:37.734 22:11:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:37.734 22:11:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:37.734 22:11:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:37.734 22:11:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:28:37.734 22:11:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:37.734 22:11:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:37.734 22:11:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.734 22:11:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:37.734 22:11:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.734 22:11:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:37.734 22:11:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:37.734 22:11:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.734 22:11:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:37.734 22:11:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.734 22:11:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:37.734 22:11:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.734 22:11:57 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:28:37.994 22:11:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.994 22:11:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:37.994 22:11:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.994 22:11:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:37.994 [2024-07-13 22:11:57.377196] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:37.994 22:11:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.994 22:11:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:37.994 22:11:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.994 22:11:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.253 Malloc1 00:28:38.253 22:11:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.253 22:11:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:38.253 22:11:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.253 22:11:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.253 22:11:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.253 22:11:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:38.253 22:11:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.253 22:11:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.253 22:11:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.253 22:11:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:38.253 22:11:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.253 22:11:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.253 [2024-07-13 22:11:57.482696] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:38.253 22:11:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.253 22:11:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=4157960 00:28:38.253 22:11:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:28:38.253 22:11:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:38.253 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.170 22:11:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:28:40.170 22:11:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.170 22:11:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:40.170 22:11:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.170 22:11:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:28:40.170 
"tick_rate": 2700000000, 00:28:40.170 "poll_groups": [ 00:28:40.170 { 00:28:40.170 "name": "nvmf_tgt_poll_group_000", 00:28:40.170 "admin_qpairs": 1, 00:28:40.170 "io_qpairs": 1, 00:28:40.170 "current_admin_qpairs": 1, 00:28:40.170 "current_io_qpairs": 1, 00:28:40.170 "pending_bdev_io": 0, 00:28:40.170 "completed_nvme_io": 17426, 00:28:40.170 "transports": [ 00:28:40.170 { 00:28:40.170 "trtype": "TCP" 00:28:40.170 } 00:28:40.170 ] 00:28:40.170 }, 00:28:40.170 { 00:28:40.170 "name": "nvmf_tgt_poll_group_001", 00:28:40.170 "admin_qpairs": 0, 00:28:40.170 "io_qpairs": 1, 00:28:40.170 "current_admin_qpairs": 0, 00:28:40.170 "current_io_qpairs": 1, 00:28:40.170 "pending_bdev_io": 0, 00:28:40.170 "completed_nvme_io": 15040, 00:28:40.170 "transports": [ 00:28:40.170 { 00:28:40.170 "trtype": "TCP" 00:28:40.170 } 00:28:40.170 ] 00:28:40.170 }, 00:28:40.170 { 00:28:40.170 "name": "nvmf_tgt_poll_group_002", 00:28:40.170 "admin_qpairs": 0, 00:28:40.170 "io_qpairs": 1, 00:28:40.170 "current_admin_qpairs": 0, 00:28:40.170 "current_io_qpairs": 1, 00:28:40.170 "pending_bdev_io": 0, 00:28:40.170 "completed_nvme_io": 14281, 00:28:40.170 "transports": [ 00:28:40.170 { 00:28:40.170 "trtype": "TCP" 00:28:40.170 } 00:28:40.170 ] 00:28:40.170 }, 00:28:40.170 { 00:28:40.170 "name": "nvmf_tgt_poll_group_003", 00:28:40.170 "admin_qpairs": 0, 00:28:40.170 "io_qpairs": 1, 00:28:40.170 "current_admin_qpairs": 0, 00:28:40.170 "current_io_qpairs": 1, 00:28:40.170 "pending_bdev_io": 0, 00:28:40.170 "completed_nvme_io": 17301, 00:28:40.170 "transports": [ 00:28:40.170 { 00:28:40.170 "trtype": "TCP" 00:28:40.170 } 00:28:40.170 ] 00:28:40.170 } 00:28:40.170 ] 00:28:40.170 }' 00:28:40.170 22:11:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:40.170 22:11:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:28:40.170 22:11:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:28:40.170 22:11:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:28:40.170 22:11:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 4157960 00:28:50.143 Initializing NVMe Controllers 00:28:50.143 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:50.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:50.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:50.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:50.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:50.143 Initialization complete. Launching workers. 
00:28:50.143 ======================================================== 00:28:50.143 Latency(us) 00:28:50.143 Device Information : IOPS MiB/s Average min max 00:28:50.143 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9455.90 36.94 6769.27 2451.90 12419.21 00:28:50.143 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8185.20 31.97 7819.88 3408.00 11950.75 00:28:50.143 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7752.60 30.28 8256.93 3861.32 12155.82 00:28:50.143 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9496.60 37.10 6741.01 2222.95 10004.85 00:28:50.143 ======================================================== 00:28:50.143 Total : 34890.29 136.29 7338.61 2222.95 12419.21 00:28:50.143 00:28:50.143 22:12:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:28:50.143 22:12:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:50.143 22:12:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:28:50.143 22:12:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:50.143 22:12:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:28:50.143 22:12:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:50.143 22:12:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:50.143 rmmod nvme_tcp 00:28:50.143 rmmod nvme_fabrics 00:28:50.143 rmmod nvme_keyring 00:28:50.143 22:12:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:50.143 22:12:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:28:50.143 22:12:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:28:50.143 22:12:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 4157678 ']' 00:28:50.143 22:12:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 4157678 00:28:50.143 22:12:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 4157678 ']' 00:28:50.143 22:12:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 4157678 00:28:50.143 22:12:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:28:50.143 22:12:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:50.143 22:12:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4157678 00:28:50.143 22:12:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:50.143 22:12:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:50.143 22:12:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4157678' 00:28:50.143 killing process with pid 4157678 00:28:50.143 22:12:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 4157678 00:28:50.143 22:12:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 4157678 00:28:50.143 22:12:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:50.143 22:12:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:50.143 22:12:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:50.143 22:12:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:50.143 22:12:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:50.143 22:12:09 nvmf_tcp.nvmf_perf_adq -- 
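The jq check above is how the harness asserts qpair placement for this baseline pass: with ADQ not yet steering traffic, the four IO qpairs are distributed round-robin, so every poll group must report current_io_qpairs == 1 and the count must equal 4. A sketch of that assertion, assuming $nvmf_stats holds the JSON printed above:

# Sketch: count poll groups owning exactly one IO qpair and compare to 4,
# mirroring the select/wc -l check above.
count=$(jq -r '.poll_groups[]
               | select(.current_io_qpairs == 1)
               | length' <<< "$nvmf_stats" | wc -l)
if [[ $count -ne 4 ]]; then
    echo "unexpected qpair placement: $count of 4 poll groups active"
    exit 1
fi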
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.143 22:12:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:50.143 22:12:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.052 22:12:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:52.052 22:12:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:28:52.052 22:12:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:28:52.620 22:12:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:28:54.587 22:12:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:28:59.867 22:12:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:28:59.867 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:59.867 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.867 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:59.867 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:59.867 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:59.867 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.867 22:12:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:59.867 22:12:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.867 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:59.867 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:59.868 22:12:18 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:59.868 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:59.868 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:59.868 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:59.868 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:59.868 
22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:59.868 22:12:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:59.868 22:12:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:59.868 22:12:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:59.868 22:12:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:59.868 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.868 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:28:59.868 00:28:59.868 --- 10.0.0.2 ping statistics --- 00:28:59.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.868 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:28:59.868 22:12:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:59.868 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:59.868 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:28:59.868 00:28:59.868 --- 10.0.0.1 ping statistics --- 00:28:59.868 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.868 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:28:59.868 22:12:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.868 22:12:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:28:59.868 22:12:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:59.868 22:12:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.868 22:12:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:59.868 22:12:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:59.868 22:12:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.868 22:12:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:59.868 22:12:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:59.868 22:12:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:28:59.868 22:12:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:59.868 22:12:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:59.868 22:12:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:59.868 net.core.busy_poll = 1 00:28:59.868 22:12:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:59.868 net.core.busy_read = 1 00:28:59.868 22:12:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:59.868 22:12:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:59.868 22:12:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:28:59.868 22:12:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:59.868 22:12:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:59.869 22:12:19 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:59.869 22:12:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:59.869 22:12:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:59.869 22:12:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:59.869 22:12:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=4161325 00:28:59.869 22:12:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:59.869 22:12:19 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 4161325 00:28:59.869 22:12:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 4161325 ']' 00:28:59.869 22:12:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.869 22:12:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:59.869 22:12:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.869 22:12:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:59.869 22:12:19 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.127 [2024-07-13 22:12:19.292022] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:00.127 [2024-07-13 22:12:19.292176] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:00.127 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.128 [2024-07-13 22:12:19.427134] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:00.386 [2024-07-13 22:12:19.689235] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:00.386 [2024-07-13 22:12:19.689296] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:00.386 [2024-07-13 22:12:19.689334] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:00.386 [2024-07-13 22:12:19.689352] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:00.386 [2024-07-13 22:12:19.689369] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
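adq_configure_driver above is the core of the ADQ setup: hardware TC offload is enabled on the target port, the channel-pkt-inspect-optimize private flag is disabled, busy polling is switched on, an mqprio root qdisc carves the queues into two traffic classes, and a hardware flower filter (skip_sw) steers NVMe/TCP flows to 10.0.0.2:4420 into TC 1; set_xps_rxqs then aligns transmit queues with receive queues. A condensed sketch with the exact values from this run:

# Sketch of adq_configure_driver above; cvl_0_0 lives in the target netns.
NS="ip netns exec cvl_0_0_ns_spdk"
$NS ethtool --offload cvl_0_0 hw-tc-offload on
$NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# two traffic classes: TC0 = queues 0-1 (default), TC1 = queues 2-3 (ADQ)
$NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 \
    queues 2@0 2@2 hw 1 mode channel
$NS tc qdisc add dev cvl_0_0 ingress
# steer the NVMe/TCP flow into TC1 entirely in hardware (skip_sw)
$NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

On the SPDK side this pairs with sock_impl_set_options --enable-placement-id 1 and a transport created with --sock-priority 1 (visible just below), so incoming connections are grouped onto poll groups by NIC queue rather than round-robin.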
00:29:00.386 [2024-07-13 22:12:19.689522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.386 [2024-07-13 22:12:19.689573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:00.386 [2024-07-13 22:12:19.689612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.386 [2024-07-13 22:12:19.689623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:00.951 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:00.951 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:29:00.951 22:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:00.951 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:00.951 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.952 22:12:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.952 22:12:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:29:00.952 22:12:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:00.952 22:12:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:00.952 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:00.952 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:00.952 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.210 22:12:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:01.210 22:12:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:29:01.210 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.210 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:01.210 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.210 22:12:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:01.210 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.210 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:01.468 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.468 22:12:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:29:01.468 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.468 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:01.468 [2024-07-13 22:12:20.702623] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:01.468 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.468 22:12:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:01.468 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.468 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:01.468 Malloc1 00:29:01.468 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.468 22:12:20 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:01.468 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.468 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:01.468 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.468 22:12:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:01.468 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.468 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:01.468 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.468 22:12:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:01.468 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:01.468 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:01.468 [2024-07-13 22:12:20.806231] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:01.468 22:12:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:01.468 22:12:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=4161493 00:29:01.468 22:12:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:29:01.468 22:12:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:01.726 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.626 22:12:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:29:03.626 22:12:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:03.626 22:12:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:03.626 22:12:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:03.626 22:12:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:29:03.626 "tick_rate": 2700000000, 00:29:03.626 "poll_groups": [ 00:29:03.626 { 00:29:03.626 "name": "nvmf_tgt_poll_group_000", 00:29:03.626 "admin_qpairs": 1, 00:29:03.626 "io_qpairs": 2, 00:29:03.626 "current_admin_qpairs": 1, 00:29:03.626 "current_io_qpairs": 2, 00:29:03.626 "pending_bdev_io": 0, 00:29:03.626 "completed_nvme_io": 18998, 00:29:03.626 "transports": [ 00:29:03.626 { 00:29:03.626 "trtype": "TCP" 00:29:03.626 } 00:29:03.626 ] 00:29:03.626 }, 00:29:03.626 { 00:29:03.626 "name": "nvmf_tgt_poll_group_001", 00:29:03.626 "admin_qpairs": 0, 00:29:03.626 "io_qpairs": 2, 00:29:03.626 "current_admin_qpairs": 0, 00:29:03.626 "current_io_qpairs": 2, 00:29:03.626 "pending_bdev_io": 0, 00:29:03.626 "completed_nvme_io": 20500, 00:29:03.626 "transports": [ 00:29:03.626 { 00:29:03.626 "trtype": "TCP" 00:29:03.626 } 00:29:03.626 ] 00:29:03.626 }, 00:29:03.626 { 00:29:03.626 "name": "nvmf_tgt_poll_group_002", 00:29:03.626 "admin_qpairs": 0, 00:29:03.626 "io_qpairs": 0, 00:29:03.626 "current_admin_qpairs": 0, 00:29:03.626 "current_io_qpairs": 0, 00:29:03.626 "pending_bdev_io": 0, 00:29:03.626 "completed_nvme_io": 0, 
00:29:03.626 "transports": [ 00:29:03.626 { 00:29:03.626 "trtype": "TCP" 00:29:03.626 } 00:29:03.626 ] 00:29:03.626 }, 00:29:03.626 { 00:29:03.626 "name": "nvmf_tgt_poll_group_003", 00:29:03.626 "admin_qpairs": 0, 00:29:03.626 "io_qpairs": 0, 00:29:03.626 "current_admin_qpairs": 0, 00:29:03.626 "current_io_qpairs": 0, 00:29:03.626 "pending_bdev_io": 0, 00:29:03.626 "completed_nvme_io": 0, 00:29:03.626 "transports": [ 00:29:03.626 { 00:29:03.626 "trtype": "TCP" 00:29:03.626 } 00:29:03.626 ] 00:29:03.626 } 00:29:03.626 ] 00:29:03.626 }' 00:29:03.626 22:12:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:29:03.626 22:12:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:29:03.626 22:12:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:29:03.626 22:12:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:29:03.626 22:12:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 4161493 00:29:11.733 Initializing NVMe Controllers 00:29:11.733 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:11.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:11.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:11.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:11.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:11.733 Initialization complete. Launching workers. 00:29:11.733 ======================================================== 00:29:11.733 Latency(us) 00:29:11.733 Device Information : IOPS MiB/s Average min max 00:29:11.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4935.60 19.28 12969.02 2417.49 57456.12 00:29:11.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5206.50 20.34 12326.88 2497.05 59388.00 00:29:11.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4806.50 18.78 13324.70 3321.00 57882.96 00:29:11.733 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5939.10 23.20 10782.83 2228.20 57209.76 00:29:11.733 ======================================================== 00:29:11.733 Total : 20887.70 81.59 12269.19 2228.20 59388.00 00:29:11.733 00:29:11.733 22:12:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:29:11.733 22:12:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:11.733 22:12:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:29:11.733 22:12:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:11.733 22:12:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:29:11.733 22:12:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:11.733 22:12:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:11.733 rmmod nvme_tcp 00:29:11.733 rmmod nvme_fabrics 00:29:11.733 rmmod nvme_keyring 00:29:11.733 22:12:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:11.733 22:12:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:29:11.733 22:12:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:29:11.733 22:12:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 4161325 ']' 00:29:11.733 22:12:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 4161325 00:29:11.733 22:12:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 4161325 ']' 00:29:11.733 22:12:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 4161325 00:29:11.734 22:12:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:29:11.734 22:12:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:11.734 22:12:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4161325 00:29:11.734 22:12:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:11.734 22:12:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:11.734 22:12:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4161325' 00:29:11.734 killing process with pid 4161325 00:29:11.734 22:12:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 4161325 00:29:11.734 22:12:31 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 4161325 00:29:13.633 22:12:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:13.633 22:12:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:13.633 22:12:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:13.633 22:12:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:13.633 22:12:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:13.633 22:12:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.633 22:12:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:13.633 22:12:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.533 22:12:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:15.533 22:12:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:15.533 00:29:15.533 real 0m48.416s 00:29:15.533 user 2m45.542s 00:29:15.533 sys 0m12.836s 00:29:15.533 22:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:15.533 22:12:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:15.533 ************************************ 00:29:15.533 END TEST nvmf_perf_adq 00:29:15.533 ************************************ 00:29:15.533 22:12:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:15.533 22:12:34 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:15.533 22:12:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:15.533 22:12:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:15.533 22:12:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:15.533 ************************************ 00:29:15.533 START TEST nvmf_shutdown 00:29:15.533 ************************************ 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:15.533 * Looking for test storage... 
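The teardown traced above follows a fixed pattern: killprocess probes the target pid with kill -0, uses ps to make sure it is not about to signal a sudo wrapper, sends a plain SIGTERM so the SPDK reactors can exit cleanly, and waits for the pid before the nvme kernel modules are removed and the namespace addresses are flushed. A condensed sketch of that pattern, assuming the pid is held in $nvmfpid as in this run:

# Shutdown sketch modelled on the killprocess/nvmftestfini trace above.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0      # already gone, nothing to do
    # Never signal the sudo wrapper itself, only the real reactor process.
    [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
    kill "$pid"               # SIGTERM: SPDK tears down subsystems gracefully
    wait "$pid" 2>/dev/null   # works because nvmf_tgt is a child of this shell
}
killprocess "$nvmfpid"
modprobe -r nvme-tcp nvme-fabrics             # mirrors the rmmod output above
ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # namespace name taken from this log
ip -4 addr flush cvl_0_1                      # flush the initiator-side address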
00:29:15.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:15.533 ************************************ 00:29:15.533 START TEST nvmf_shutdown_tc1 00:29:15.533 ************************************ 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:29:15.533 22:12:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:15.533 22:12:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:17.430 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:17.430 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:17.430 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:17.431 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:17.431 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:17.431 22:12:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:17.431 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:17.431 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:17.431 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:17.432 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:17.432 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:17.432 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:17.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:17.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:29:17.689 00:29:17.689 --- 10.0.0.2 ping statistics --- 00:29:17.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.689 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:17.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:17.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:29:17.689 00:29:17.689 --- 10.0.0.1 ping statistics --- 00:29:17.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.689 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=4164891 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 4164891 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 4164891 ']' 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:17.689 22:12:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:17.689 [2024-07-13 22:12:37.015399] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:29:17.689 [2024-07-13 22:12:37.015543] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:17.947 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.947 [2024-07-13 22:12:37.150829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:18.204 [2024-07-13 22:12:37.404252] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:18.204 [2024-07-13 22:12:37.404332] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:18.204 [2024-07-13 22:12:37.404359] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:18.204 [2024-07-13 22:12:37.404380] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:18.204 [2024-07-13 22:12:37.404401] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:18.204 [2024-07-13 22:12:37.404539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:18.204 [2024-07-13 22:12:37.404662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:18.204 [2024-07-13 22:12:37.404701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.204 [2024-07-13 22:12:37.404711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:18.766 22:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:18.766 22:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:29:18.766 22:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:18.766 22:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:18.766 22:12:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:18.766 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:18.766 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:18.766 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.766 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:18.766 [2024-07-13 22:12:38.010417] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:18.766 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.766 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:18.766 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:18.766 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:18.766 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:18.766 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:18.766 22:12:38 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:18.766 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:18.766 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:18.766 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:18.766 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:18.766 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:18.766 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:18.766 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:18.766 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:18.766 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:18.766 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:18.766 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:18.767 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:18.767 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:18.767 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:18.767 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:18.767 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:18.767 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:18.767 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:18.767 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:29:18.767 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:18.767 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.767 22:12:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:18.767 Malloc1 00:29:18.767 [2024-07-13 22:12:38.146701] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.052 Malloc2 00:29:19.052 Malloc3 00:29:19.052 Malloc4 00:29:19.310 Malloc5 00:29:19.310 Malloc6 00:29:19.567 Malloc7 00:29:19.567 Malloc8 00:29:19.567 Malloc9 00:29:19.825 Malloc10 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=4165090 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 4165090 
/var/tmp/bdevperf.sock 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 4165090 ']' 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:19.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:19.825 { 00:29:19.825 "params": { 00:29:19.825 "name": "Nvme$subsystem", 00:29:19.825 "trtype": "$TEST_TRANSPORT", 00:29:19.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:19.825 "adrfam": "ipv4", 00:29:19.825 "trsvcid": "$NVMF_PORT", 00:29:19.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:19.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:19.825 "hdgst": ${hdgst:-false}, 00:29:19.825 "ddgst": ${ddgst:-false} 00:29:19.825 }, 00:29:19.825 "method": "bdev_nvme_attach_controller" 00:29:19.825 } 00:29:19.825 EOF 00:29:19.825 )") 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:19.825 { 00:29:19.825 "params": { 00:29:19.825 "name": "Nvme$subsystem", 00:29:19.825 "trtype": "$TEST_TRANSPORT", 00:29:19.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:19.825 "adrfam": "ipv4", 00:29:19.825 "trsvcid": "$NVMF_PORT", 00:29:19.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:19.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:19.825 "hdgst": ${hdgst:-false}, 00:29:19.825 "ddgst": ${ddgst:-false} 00:29:19.825 }, 00:29:19.825 "method": "bdev_nvme_attach_controller" 00:29:19.825 } 00:29:19.825 EOF 00:29:19.825 )") 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:19.825 { 00:29:19.825 "params": { 00:29:19.825 
"name": "Nvme$subsystem", 00:29:19.825 "trtype": "$TEST_TRANSPORT", 00:29:19.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:19.825 "adrfam": "ipv4", 00:29:19.825 "trsvcid": "$NVMF_PORT", 00:29:19.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:19.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:19.825 "hdgst": ${hdgst:-false}, 00:29:19.825 "ddgst": ${ddgst:-false} 00:29:19.825 }, 00:29:19.825 "method": "bdev_nvme_attach_controller" 00:29:19.825 } 00:29:19.825 EOF 00:29:19.825 )") 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:19.825 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:19.825 { 00:29:19.825 "params": { 00:29:19.825 "name": "Nvme$subsystem", 00:29:19.825 "trtype": "$TEST_TRANSPORT", 00:29:19.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:19.825 "adrfam": "ipv4", 00:29:19.825 "trsvcid": "$NVMF_PORT", 00:29:19.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:19.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:19.826 "hdgst": ${hdgst:-false}, 00:29:19.826 "ddgst": ${ddgst:-false} 00:29:19.826 }, 00:29:19.826 "method": "bdev_nvme_attach_controller" 00:29:19.826 } 00:29:19.826 EOF 00:29:19.826 )") 00:29:19.826 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:19.826 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:19.826 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:19.826 { 00:29:19.826 "params": { 00:29:19.826 "name": "Nvme$subsystem", 00:29:19.826 "trtype": "$TEST_TRANSPORT", 00:29:19.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:19.826 "adrfam": "ipv4", 00:29:19.826 "trsvcid": "$NVMF_PORT", 00:29:19.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:19.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:19.826 "hdgst": ${hdgst:-false}, 00:29:19.826 "ddgst": ${ddgst:-false} 00:29:19.826 }, 00:29:19.826 "method": "bdev_nvme_attach_controller" 00:29:19.826 } 00:29:19.826 EOF 00:29:19.826 )") 00:29:19.826 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:19.826 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:19.826 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:19.826 { 00:29:19.826 "params": { 00:29:19.826 "name": "Nvme$subsystem", 00:29:19.826 "trtype": "$TEST_TRANSPORT", 00:29:19.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:19.826 "adrfam": "ipv4", 00:29:19.826 "trsvcid": "$NVMF_PORT", 00:29:19.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:19.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:19.826 "hdgst": ${hdgst:-false}, 00:29:19.826 "ddgst": ${ddgst:-false} 00:29:19.826 }, 00:29:19.826 "method": "bdev_nvme_attach_controller" 00:29:19.826 } 00:29:19.826 EOF 00:29:19.826 )") 00:29:19.826 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:19.826 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:19.826 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:19.826 { 00:29:19.826 "params": { 00:29:19.826 "name": "Nvme$subsystem", 
00:29:19.826 "trtype": "$TEST_TRANSPORT", 00:29:19.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:19.826 "adrfam": "ipv4", 00:29:19.826 "trsvcid": "$NVMF_PORT", 00:29:19.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:19.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:19.826 "hdgst": ${hdgst:-false}, 00:29:19.826 "ddgst": ${ddgst:-false} 00:29:19.826 }, 00:29:19.826 "method": "bdev_nvme_attach_controller" 00:29:19.826 } 00:29:19.826 EOF 00:29:19.826 )") 00:29:19.826 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:19.826 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:19.826 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:19.826 { 00:29:19.826 "params": { 00:29:19.826 "name": "Nvme$subsystem", 00:29:19.826 "trtype": "$TEST_TRANSPORT", 00:29:19.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:19.826 "adrfam": "ipv4", 00:29:19.826 "trsvcid": "$NVMF_PORT", 00:29:19.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:19.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:19.826 "hdgst": ${hdgst:-false}, 00:29:19.826 "ddgst": ${ddgst:-false} 00:29:19.826 }, 00:29:19.826 "method": "bdev_nvme_attach_controller" 00:29:19.826 } 00:29:19.826 EOF 00:29:19.826 )") 00:29:19.826 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:19.826 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:19.826 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:19.826 { 00:29:19.826 "params": { 00:29:19.826 "name": "Nvme$subsystem", 00:29:19.826 "trtype": "$TEST_TRANSPORT", 00:29:19.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:19.826 "adrfam": "ipv4", 00:29:19.826 "trsvcid": "$NVMF_PORT", 00:29:19.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:19.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:19.826 "hdgst": ${hdgst:-false}, 00:29:19.826 "ddgst": ${ddgst:-false} 00:29:19.826 }, 00:29:19.826 "method": "bdev_nvme_attach_controller" 00:29:19.826 } 00:29:19.826 EOF 00:29:19.826 )") 00:29:19.826 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:19.826 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:19.826 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:19.826 { 00:29:19.826 "params": { 00:29:19.826 "name": "Nvme$subsystem", 00:29:19.826 "trtype": "$TEST_TRANSPORT", 00:29:19.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:19.826 "adrfam": "ipv4", 00:29:19.826 "trsvcid": "$NVMF_PORT", 00:29:19.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:19.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:19.826 "hdgst": ${hdgst:-false}, 00:29:19.826 "ddgst": ${ddgst:-false} 00:29:19.826 }, 00:29:19.826 "method": "bdev_nvme_attach_controller" 00:29:19.826 } 00:29:19.826 EOF 00:29:19.826 )") 00:29:19.826 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:19.826 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:29:19.826 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:29:19.826 22:12:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:19.826 "params": { 00:29:19.826 "name": "Nvme1", 00:29:19.826 "trtype": "tcp", 00:29:19.826 "traddr": "10.0.0.2", 00:29:19.826 "adrfam": "ipv4", 00:29:19.826 "trsvcid": "4420", 00:29:19.826 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:19.826 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:19.826 "hdgst": false, 00:29:19.826 "ddgst": false 00:29:19.826 }, 00:29:19.826 "method": "bdev_nvme_attach_controller" 00:29:19.826 },{ 00:29:19.826 "params": { 00:29:19.826 "name": "Nvme2", 00:29:19.826 "trtype": "tcp", 00:29:19.826 "traddr": "10.0.0.2", 00:29:19.826 "adrfam": "ipv4", 00:29:19.826 "trsvcid": "4420", 00:29:19.826 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:19.826 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:19.826 "hdgst": false, 00:29:19.826 "ddgst": false 00:29:19.826 }, 00:29:19.826 "method": "bdev_nvme_attach_controller" 00:29:19.826 },{ 00:29:19.826 "params": { 00:29:19.826 "name": "Nvme3", 00:29:19.826 "trtype": "tcp", 00:29:19.826 "traddr": "10.0.0.2", 00:29:19.826 "adrfam": "ipv4", 00:29:19.826 "trsvcid": "4420", 00:29:19.826 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:19.826 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:19.826 "hdgst": false, 00:29:19.826 "ddgst": false 00:29:19.826 }, 00:29:19.826 "method": "bdev_nvme_attach_controller" 00:29:19.826 },{ 00:29:19.826 "params": { 00:29:19.826 "name": "Nvme4", 00:29:19.826 "trtype": "tcp", 00:29:19.826 "traddr": "10.0.0.2", 00:29:19.826 "adrfam": "ipv4", 00:29:19.826 "trsvcid": "4420", 00:29:19.826 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:19.826 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:19.826 "hdgst": false, 00:29:19.826 "ddgst": false 00:29:19.826 }, 00:29:19.826 "method": "bdev_nvme_attach_controller" 00:29:19.826 },{ 00:29:19.826 "params": { 00:29:19.826 "name": "Nvme5", 00:29:19.826 "trtype": "tcp", 00:29:19.826 "traddr": "10.0.0.2", 00:29:19.826 "adrfam": "ipv4", 00:29:19.826 "trsvcid": "4420", 00:29:19.826 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:19.826 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:19.826 "hdgst": false, 00:29:19.826 "ddgst": false 00:29:19.826 }, 00:29:19.826 "method": "bdev_nvme_attach_controller" 00:29:19.826 },{ 00:29:19.826 "params": { 00:29:19.826 "name": "Nvme6", 00:29:19.826 "trtype": "tcp", 00:29:19.826 "traddr": "10.0.0.2", 00:29:19.826 "adrfam": "ipv4", 00:29:19.826 "trsvcid": "4420", 00:29:19.826 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:19.826 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:19.826 "hdgst": false, 00:29:19.826 "ddgst": false 00:29:19.826 }, 00:29:19.826 "method": "bdev_nvme_attach_controller" 00:29:19.826 },{ 00:29:19.826 "params": { 00:29:19.826 "name": "Nvme7", 00:29:19.826 "trtype": "tcp", 00:29:19.826 "traddr": "10.0.0.2", 00:29:19.826 "adrfam": "ipv4", 00:29:19.826 "trsvcid": "4420", 00:29:19.826 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:19.826 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:19.826 "hdgst": false, 00:29:19.826 "ddgst": false 00:29:19.826 }, 00:29:19.826 "method": "bdev_nvme_attach_controller" 00:29:19.826 },{ 00:29:19.826 "params": { 00:29:19.826 "name": "Nvme8", 00:29:19.826 "trtype": "tcp", 00:29:19.826 "traddr": "10.0.0.2", 00:29:19.826 "adrfam": "ipv4", 00:29:19.826 "trsvcid": "4420", 00:29:19.826 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:19.826 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:19.826 "hdgst": false, 
00:29:19.826 "ddgst": false 00:29:19.826 }, 00:29:19.826 "method": "bdev_nvme_attach_controller" 00:29:19.826 },{ 00:29:19.826 "params": { 00:29:19.826 "name": "Nvme9", 00:29:19.826 "trtype": "tcp", 00:29:19.826 "traddr": "10.0.0.2", 00:29:19.826 "adrfam": "ipv4", 00:29:19.826 "trsvcid": "4420", 00:29:19.826 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:19.826 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:19.826 "hdgst": false, 00:29:19.826 "ddgst": false 00:29:19.827 }, 00:29:19.827 "method": "bdev_nvme_attach_controller" 00:29:19.827 },{ 00:29:19.827 "params": { 00:29:19.827 "name": "Nvme10", 00:29:19.827 "trtype": "tcp", 00:29:19.827 "traddr": "10.0.0.2", 00:29:19.827 "adrfam": "ipv4", 00:29:19.827 "trsvcid": "4420", 00:29:19.827 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:19.827 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:19.827 "hdgst": false, 00:29:19.827 "ddgst": false 00:29:19.827 }, 00:29:19.827 "method": "bdev_nvme_attach_controller" 00:29:19.827 }' 00:29:19.827 [2024-07-13 22:12:39.148043] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:19.827 [2024-07-13 22:12:39.148204] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:20.084 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.084 [2024-07-13 22:12:39.290757] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.342 [2024-07-13 22:12:39.531068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.868 22:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:22.868 22:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:29:22.868 22:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:22.868 22:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:22.868 22:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:22.868 22:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.868 22:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 4165090 00:29:22.868 22:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:29:22.868 22:12:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:29:23.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 4165090 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 4164891 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:29:23.800 22:12:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.800 { 00:29:23.800 "params": { 00:29:23.800 "name": "Nvme$subsystem", 00:29:23.800 "trtype": "$TEST_TRANSPORT", 00:29:23.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.800 "adrfam": "ipv4", 00:29:23.800 "trsvcid": "$NVMF_PORT", 00:29:23.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.800 "hdgst": ${hdgst:-false}, 00:29:23.800 "ddgst": ${ddgst:-false} 00:29:23.800 }, 00:29:23.800 "method": "bdev_nvme_attach_controller" 00:29:23.800 } 00:29:23.800 EOF 00:29:23.800 )") 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.800 { 00:29:23.800 "params": { 00:29:23.800 "name": "Nvme$subsystem", 00:29:23.800 "trtype": "$TEST_TRANSPORT", 00:29:23.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.800 "adrfam": "ipv4", 00:29:23.800 "trsvcid": "$NVMF_PORT", 00:29:23.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.800 "hdgst": ${hdgst:-false}, 00:29:23.800 "ddgst": ${ddgst:-false} 00:29:23.800 }, 00:29:23.800 "method": "bdev_nvme_attach_controller" 00:29:23.800 } 00:29:23.800 EOF 00:29:23.800 )") 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.800 { 00:29:23.800 "params": { 00:29:23.800 "name": "Nvme$subsystem", 00:29:23.800 "trtype": "$TEST_TRANSPORT", 00:29:23.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.800 "adrfam": "ipv4", 00:29:23.800 "trsvcid": "$NVMF_PORT", 00:29:23.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.800 "hdgst": ${hdgst:-false}, 00:29:23.800 "ddgst": ${ddgst:-false} 00:29:23.800 }, 00:29:23.800 "method": "bdev_nvme_attach_controller" 00:29:23.800 } 00:29:23.800 EOF 00:29:23.800 )") 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.800 { 00:29:23.800 "params": { 00:29:23.800 "name": "Nvme$subsystem", 00:29:23.800 "trtype": "$TEST_TRANSPORT", 00:29:23.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.800 "adrfam": "ipv4", 00:29:23.800 "trsvcid": "$NVMF_PORT", 00:29:23.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.800 "hdgst": ${hdgst:-false}, 00:29:23.800 "ddgst": ${ddgst:-false} 00:29:23.800 }, 00:29:23.800 "method": "bdev_nvme_attach_controller" 00:29:23.800 } 00:29:23.800 EOF 00:29:23.800 )") 00:29:23.800 22:12:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.800 { 00:29:23.800 "params": { 00:29:23.800 "name": "Nvme$subsystem", 00:29:23.800 "trtype": "$TEST_TRANSPORT", 00:29:23.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.800 "adrfam": "ipv4", 00:29:23.800 "trsvcid": "$NVMF_PORT", 00:29:23.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.800 "hdgst": ${hdgst:-false}, 00:29:23.800 "ddgst": ${ddgst:-false} 00:29:23.800 }, 00:29:23.800 "method": "bdev_nvme_attach_controller" 00:29:23.800 } 00:29:23.800 EOF 00:29:23.800 )") 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.800 { 00:29:23.800 "params": { 00:29:23.800 "name": "Nvme$subsystem", 00:29:23.800 "trtype": "$TEST_TRANSPORT", 00:29:23.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.800 "adrfam": "ipv4", 00:29:23.800 "trsvcid": "$NVMF_PORT", 00:29:23.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.800 "hdgst": ${hdgst:-false}, 00:29:23.800 "ddgst": ${ddgst:-false} 00:29:23.800 }, 00:29:23.800 "method": "bdev_nvme_attach_controller" 00:29:23.800 } 00:29:23.800 EOF 00:29:23.800 )") 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.800 { 00:29:23.800 "params": { 00:29:23.800 "name": "Nvme$subsystem", 00:29:23.800 "trtype": "$TEST_TRANSPORT", 00:29:23.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.800 "adrfam": "ipv4", 00:29:23.800 "trsvcid": "$NVMF_PORT", 00:29:23.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.800 "hdgst": ${hdgst:-false}, 00:29:23.800 "ddgst": ${ddgst:-false} 00:29:23.800 }, 00:29:23.800 "method": "bdev_nvme_attach_controller" 00:29:23.800 } 00:29:23.800 EOF 00:29:23.800 )") 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.800 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.800 { 00:29:23.800 "params": { 00:29:23.800 "name": "Nvme$subsystem", 00:29:23.800 "trtype": "$TEST_TRANSPORT", 00:29:23.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.800 "adrfam": "ipv4", 00:29:23.800 "trsvcid": "$NVMF_PORT", 00:29:23.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.800 "hdgst": ${hdgst:-false}, 00:29:23.801 "ddgst": ${ddgst:-false} 00:29:23.801 }, 00:29:23.801 "method": "bdev_nvme_attach_controller" 00:29:23.801 } 00:29:23.801 EOF 00:29:23.801 )") 00:29:23.801 22:12:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:23.801 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.801 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.801 { 00:29:23.801 "params": { 00:29:23.801 "name": "Nvme$subsystem", 00:29:23.801 "trtype": "$TEST_TRANSPORT", 00:29:23.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.801 "adrfam": "ipv4", 00:29:23.801 "trsvcid": "$NVMF_PORT", 00:29:23.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.801 "hdgst": ${hdgst:-false}, 00:29:23.801 "ddgst": ${ddgst:-false} 00:29:23.801 }, 00:29:23.801 "method": "bdev_nvme_attach_controller" 00:29:23.801 } 00:29:23.801 EOF 00:29:23.801 )") 00:29:23.801 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:23.801 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:23.801 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:23.801 { 00:29:23.801 "params": { 00:29:23.801 "name": "Nvme$subsystem", 00:29:23.801 "trtype": "$TEST_TRANSPORT", 00:29:23.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.801 "adrfam": "ipv4", 00:29:23.801 "trsvcid": "$NVMF_PORT", 00:29:23.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.801 "hdgst": ${hdgst:-false}, 00:29:23.801 "ddgst": ${ddgst:-false} 00:29:23.801 }, 00:29:23.801 "method": "bdev_nvme_attach_controller" 00:29:23.801 } 00:29:23.801 EOF 00:29:23.801 )") 00:29:23.801 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:29:23.801 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
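[Note: the heredoc loop traced above is gen_nvmf_target_json (nvmf/common.sh) building one bdev_nvme_attach_controller stanza per subsystem id; the stanzas are then comma-joined and validated/pretty-printed through jq. A condensed sketch reconstructed from the xtrace follows; the outer "subsystems"/"bdev"/"config" wrapper is the standard SPDK JSON-config shape and is assumed here, since the trace only records the joined stanzas.]

config=()
for subsystem in "${@:-1}"; do
    # one attach-controller stanza per subsystem id
    stanza=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )
    config+=("$stanza")
done
# comma-join the stanzas into the bdev subsystem of a JSON config and
# validate/pretty-print the whole document with jq
jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        $(IFS=","; printf '%s\n' "${config[*]}")
      ]
    }
  ]
}
JSON

[Keeping the stanzas in an array and joining with IFS avoids any temp files: the finished document is handed to the consumer over a /dev/fd process substitution, as the bdevperf command lines in this log show.]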
00:29:23.801 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:29:23.801 22:12:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:23.801 "params": { 00:29:23.801 "name": "Nvme1", 00:29:23.801 "trtype": "tcp", 00:29:23.801 "traddr": "10.0.0.2", 00:29:23.801 "adrfam": "ipv4", 00:29:23.801 "trsvcid": "4420", 00:29:23.801 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:23.801 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:23.801 "hdgst": false, 00:29:23.801 "ddgst": false 00:29:23.801 }, 00:29:23.801 "method": "bdev_nvme_attach_controller" 00:29:23.801 },{ 00:29:23.801 "params": { 00:29:23.801 "name": "Nvme2", 00:29:23.801 "trtype": "tcp", 00:29:23.801 "traddr": "10.0.0.2", 00:29:23.801 "adrfam": "ipv4", 00:29:23.801 "trsvcid": "4420", 00:29:23.801 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:23.801 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:23.801 "hdgst": false, 00:29:23.801 "ddgst": false 00:29:23.801 }, 00:29:23.801 "method": "bdev_nvme_attach_controller" 00:29:23.801 },{ 00:29:23.801 "params": { 00:29:23.801 "name": "Nvme3", 00:29:23.801 "trtype": "tcp", 00:29:23.801 "traddr": "10.0.0.2", 00:29:23.801 "adrfam": "ipv4", 00:29:23.801 "trsvcid": "4420", 00:29:23.801 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:23.801 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:23.801 "hdgst": false, 00:29:23.801 "ddgst": false 00:29:23.801 }, 00:29:23.801 "method": "bdev_nvme_attach_controller" 00:29:23.801 },{ 00:29:23.801 "params": { 00:29:23.801 "name": "Nvme4", 00:29:23.801 "trtype": "tcp", 00:29:23.801 "traddr": "10.0.0.2", 00:29:23.801 "adrfam": "ipv4", 00:29:23.801 "trsvcid": "4420", 00:29:23.801 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:23.801 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:23.801 "hdgst": false, 00:29:23.801 "ddgst": false 00:29:23.801 }, 00:29:23.801 "method": "bdev_nvme_attach_controller" 00:29:23.801 },{ 00:29:23.801 "params": { 00:29:23.801 "name": "Nvme5", 00:29:23.801 "trtype": "tcp", 00:29:23.801 "traddr": "10.0.0.2", 00:29:23.801 "adrfam": "ipv4", 00:29:23.801 "trsvcid": "4420", 00:29:23.801 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:23.801 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:23.801 "hdgst": false, 00:29:23.801 "ddgst": false 00:29:23.801 }, 00:29:23.801 "method": "bdev_nvme_attach_controller" 00:29:23.801 },{ 00:29:23.801 "params": { 00:29:23.801 "name": "Nvme6", 00:29:23.801 "trtype": "tcp", 00:29:23.801 "traddr": "10.0.0.2", 00:29:23.801 "adrfam": "ipv4", 00:29:23.801 "trsvcid": "4420", 00:29:23.801 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:23.801 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:23.801 "hdgst": false, 00:29:23.801 "ddgst": false 00:29:23.801 }, 00:29:23.801 "method": "bdev_nvme_attach_controller" 00:29:23.801 },{ 00:29:23.801 "params": { 00:29:23.801 "name": "Nvme7", 00:29:23.801 "trtype": "tcp", 00:29:23.801 "traddr": "10.0.0.2", 00:29:23.801 "adrfam": "ipv4", 00:29:23.801 "trsvcid": "4420", 00:29:23.801 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:23.801 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:23.801 "hdgst": false, 00:29:23.801 "ddgst": false 00:29:23.801 }, 00:29:23.801 "method": "bdev_nvme_attach_controller" 00:29:23.801 },{ 00:29:23.801 "params": { 00:29:23.801 "name": "Nvme8", 00:29:23.801 "trtype": "tcp", 00:29:23.801 "traddr": "10.0.0.2", 00:29:23.801 "adrfam": "ipv4", 00:29:23.801 "trsvcid": "4420", 00:29:23.801 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:23.801 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:23.801 "hdgst": false, 
00:29:23.801 "ddgst": false 00:29:23.801 }, 00:29:23.801 "method": "bdev_nvme_attach_controller" 00:29:23.801 },{ 00:29:23.801 "params": { 00:29:23.801 "name": "Nvme9", 00:29:23.801 "trtype": "tcp", 00:29:23.801 "traddr": "10.0.0.2", 00:29:23.801 "adrfam": "ipv4", 00:29:23.801 "trsvcid": "4420", 00:29:23.801 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:23.801 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:23.801 "hdgst": false, 00:29:23.801 "ddgst": false 00:29:23.801 }, 00:29:23.801 "method": "bdev_nvme_attach_controller" 00:29:23.801 },{ 00:29:23.801 "params": { 00:29:23.801 "name": "Nvme10", 00:29:23.801 "trtype": "tcp", 00:29:23.801 "traddr": "10.0.0.2", 00:29:23.801 "adrfam": "ipv4", 00:29:23.801 "trsvcid": "4420", 00:29:23.801 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:23.801 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:23.801 "hdgst": false, 00:29:23.801 "ddgst": false 00:29:23.801 }, 00:29:23.801 "method": "bdev_nvme_attach_controller" 00:29:23.801 }' 00:29:23.801 [2024-07-13 22:12:42.916628] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:23.801 [2024-07-13 22:12:42.916774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4165640 ] 00:29:23.801 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.801 [2024-07-13 22:12:43.044603] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.058 [2024-07-13 22:12:43.281816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.956 Running I/O for 1 seconds... 00:29:27.332 00:29:27.332 Latency(us) 00:29:27.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.332 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:27.332 Verification LBA range: start 0x0 length 0x400 00:29:27.332 Nvme1n1 : 1.22 210.52 13.16 0.00 0.00 300885.52 22039.51 284280.60 00:29:27.332 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:27.332 Verification LBA range: start 0x0 length 0x400 00:29:27.332 Nvme2n1 : 1.21 219.05 13.69 0.00 0.00 271237.40 50098.63 259425.47 00:29:27.332 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:27.332 Verification LBA range: start 0x0 length 0x400 00:29:27.332 Nvme3n1 : 1.23 208.40 13.02 0.00 0.00 293831.30 23398.78 306028.85 00:29:27.332 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:27.332 Verification LBA range: start 0x0 length 0x400 00:29:27.332 Nvme4n1 : 1.20 212.95 13.31 0.00 0.00 282287.98 27767.85 288940.94 00:29:27.332 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:27.332 Verification LBA range: start 0x0 length 0x400 00:29:27.332 Nvme5n1 : 1.09 176.46 11.03 0.00 0.00 332195.97 23495.87 299815.06 00:29:27.332 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:27.332 Verification LBA range: start 0x0 length 0x400 00:29:27.332 Nvme6n1 : 1.11 173.06 10.82 0.00 0.00 332586.10 25826.04 298261.62 00:29:27.332 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:27.332 Verification LBA range: start 0x0 length 0x400 00:29:27.332 Nvme7n1 : 1.25 205.27 12.83 0.00 0.00 279067.69 24369.68 299815.06 00:29:27.332 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:27.332 Verification LBA range: start 
0x0 length 0x400 00:29:27.332 Nvme8n1 : 1.24 206.74 12.92 0.00 0.00 271886.98 23010.42 299815.06 00:29:27.332 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:27.332 Verification LBA range: start 0x0 length 0x400 00:29:27.332 Nvme9n1 : 1.25 204.39 12.77 0.00 0.00 270486.19 25049.32 310689.19 00:29:27.332 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:27.332 Verification LBA range: start 0x0 length 0x400 00:29:27.332 Nvme10n1 : 1.26 202.97 12.69 0.00 0.00 267704.13 23495.87 338651.21 00:29:27.333 =================================================================================================================== 00:29:27.333 Total : 2019.82 126.24 0.00 0.00 287942.28 22039.51 338651.21 00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:28.267 rmmod nvme_tcp 00:29:28.267 rmmod nvme_fabrics 00:29:28.267 rmmod nvme_keyring 00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 4164891 ']' 00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 4164891 00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 4164891 ']' 00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 4164891 00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4164891 00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
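[Note: the teardown around this point runs autotest_common.sh's killprocess guard, whose individual checks are visible in the trace immediately above and below: bail if no pid was recorded, probe liveness with kill -0, and on Linux read the process's comm field so a bare sudo wrapper is never signalled directly. A reconstruction condensed from the xtrace; the sudo branch and error handling are abbreviated.]

killprocess() {
    local pid=$1 process_name=
    [ -n "$pid" ] || return 1                # no pid recorded for this app
    kill -0 "$pid" 2>/dev/null || return 0   # process already gone
    if [ "$(uname)" = Linux ]; then
        # read the comm field so a sudo wrapper is never the kill target
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    if [ "$process_name" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                  # reap the child, tolerate nonzero exit
    fi
}

[kill -0 probes liveness without delivering a signal, which is why it appears in the trace before any actual kill is attempted.]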
00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4164891' 00:29:28.267 killing process with pid 4164891 00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 4164891 00:29:28.267 22:12:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 4164891 00:29:31.548 22:12:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:31.548 22:12:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:31.548 22:12:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:31.548 22:12:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:31.548 22:12:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:31.548 22:12:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.548 22:12:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:31.548 22:12:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:33.469 00:29:33.469 real 0m17.761s 00:29:33.469 user 0m57.286s 00:29:33.469 sys 0m3.895s 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:33.469 ************************************ 00:29:33.469 END TEST nvmf_shutdown_tc1 00:29:33.469 ************************************ 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:33.469 ************************************ 00:29:33.469 START TEST nvmf_shutdown_tc2 00:29:33.469 ************************************ 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:33.469 22:12:52 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:33.469 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:33.469 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:33.469 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:33.469 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:33.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:33.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:29:33.469 00:29:33.469 --- 10.0.0.2 ping statistics --- 00:29:33.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.469 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:33.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:33.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:29:33.469 00:29:33.469 --- 10.0.0.1 ping statistics --- 00:29:33.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.469 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=4166793 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 4166793 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 4166793 ']' 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:33.469 22:12:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:33.469 [2024-07-13 22:12:52.781533] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:33.469 [2024-07-13 22:12:52.781671] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.727 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.727 [2024-07-13 22:12:52.920582] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:33.985 [2024-07-13 22:12:53.177022] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.985 [2024-07-13 22:12:53.177094] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:33.985 [2024-07-13 22:12:53.177121] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:33.985 [2024-07-13 22:12:53.177142] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:33.985 [2024-07-13 22:12:53.177174] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
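[Note: for tc2 the target runs inside the freshly created cvl_0_0_ns_spdk namespace: NVMF_APP is prefixed with the netns exec wrapper, nvmf_tgt starts with all tracepoint groups enabled (-e 0xFFFF) on core mask 0x1E, and waitforlisten blocks until the RPC socket answers. A minimal sketch of that launch, assuming the default /var/tmp/spdk.sock RPC path:]

# run the target in the namespace that owns cvl_0_0 (10.0.0.2)
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# block until the pid is alive and the UNIX-domain RPC socket accepts
# connections, so subsequent rpc_cmd calls can proceed
waitforlisten "$nvmfpid" /var/tmp/spdk.sock

[-m 0x1E pins the target to cores 1-4, which matches the four reactor notices logged right after the EAL parameters above.]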
00:29:33.985 [2024-07-13 22:12:53.177305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:33.985 [2024-07-13 22:12:53.177419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:33.985 [2024-07-13 22:12:53.177462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.985 [2024-07-13 22:12:53.177472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.550 [2024-07-13 22:12:53.703290] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:34.550 22:12:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.550 22:12:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:34.550 Malloc1 00:29:34.550 [2024-07-13 22:12:53.844713] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.550 Malloc2 00:29:34.808 Malloc3 00:29:34.808 Malloc4 00:29:35.065 Malloc5 00:29:35.065 Malloc6 00:29:35.065 Malloc7 00:29:35.323 Malloc8 00:29:35.323 Malloc9 00:29:35.581 Malloc10 00:29:35.581 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:35.581 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:35.581 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:35.581 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:35.581 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=4167111 00:29:35.581 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 4167111 /var/tmp/bdevperf.sock 00:29:35.581 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 4167111 ']' 00:29:35.581 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:35.581 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:35.581 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:35.581 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:35.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
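[Note: the tc2 setup traced above created the TCP transport (nvmf_create_transport -t tcp -o -u 8192), then cat'd one RPC block per subsystem into rpcs.txt and replayed the whole stream through a single rpc_cmd call (shutdown.sh@35); the Malloc1..Malloc10 bdevs back cnode1..cnode10, each listening on 10.0.0.2:4420. The excerpt never prints rpcs.txt itself, so the per-subsystem block below is an assumed sketch using the stock SPDK rpc.py verbs, with an illustrative 128 MiB / 512 B Malloc geometry and serial numbers; it is shown as a loop for clarity.]

for i in {1..10}; do
    # hypothetical equivalent of one rpcs.txt block (sizes/serials assumed)
    rpc_cmd bdev_malloc_create -b Malloc$i 128 512
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t tcp -a 10.0.0.2 -s 4420
done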
00:29:35.581 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:35.581 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:35.581 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:35.581 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:29:35.581 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:29:35.581 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.581 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.581 { 00:29:35.581 "params": { 00:29:35.581 "name": "Nvme$subsystem", 00:29:35.581 "trtype": "$TEST_TRANSPORT", 00:29:35.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.581 "adrfam": "ipv4", 00:29:35.581 "trsvcid": "$NVMF_PORT", 00:29:35.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.581 "hdgst": ${hdgst:-false}, 00:29:35.581 "ddgst": ${ddgst:-false} 00:29:35.581 }, 00:29:35.581 "method": "bdev_nvme_attach_controller" 00:29:35.581 } 00:29:35.581 EOF 00:29:35.581 )") 00:29:35.581 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:35.581 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.581 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.581 { 00:29:35.581 "params": { 00:29:35.581 "name": "Nvme$subsystem", 00:29:35.581 "trtype": "$TEST_TRANSPORT", 00:29:35.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.581 "adrfam": "ipv4", 00:29:35.581 "trsvcid": "$NVMF_PORT", 00:29:35.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.581 "hdgst": ${hdgst:-false}, 00:29:35.581 "ddgst": ${ddgst:-false} 00:29:35.581 }, 00:29:35.582 "method": "bdev_nvme_attach_controller" 00:29:35.582 } 00:29:35.582 EOF 00:29:35.582 )") 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.582 { 00:29:35.582 "params": { 00:29:35.582 "name": "Nvme$subsystem", 00:29:35.582 "trtype": "$TEST_TRANSPORT", 00:29:35.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.582 "adrfam": "ipv4", 00:29:35.582 "trsvcid": "$NVMF_PORT", 00:29:35.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.582 "hdgst": ${hdgst:-false}, 00:29:35.582 "ddgst": ${ddgst:-false} 00:29:35.582 }, 00:29:35.582 "method": "bdev_nvme_attach_controller" 00:29:35.582 } 00:29:35.582 EOF 00:29:35.582 )") 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.582 { 00:29:35.582 "params": { 00:29:35.582 "name": "Nvme$subsystem", 00:29:35.582 "trtype": 
"$TEST_TRANSPORT", 00:29:35.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.582 "adrfam": "ipv4", 00:29:35.582 "trsvcid": "$NVMF_PORT", 00:29:35.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.582 "hdgst": ${hdgst:-false}, 00:29:35.582 "ddgst": ${ddgst:-false} 00:29:35.582 }, 00:29:35.582 "method": "bdev_nvme_attach_controller" 00:29:35.582 } 00:29:35.582 EOF 00:29:35.582 )") 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.582 { 00:29:35.582 "params": { 00:29:35.582 "name": "Nvme$subsystem", 00:29:35.582 "trtype": "$TEST_TRANSPORT", 00:29:35.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.582 "adrfam": "ipv4", 00:29:35.582 "trsvcid": "$NVMF_PORT", 00:29:35.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.582 "hdgst": ${hdgst:-false}, 00:29:35.582 "ddgst": ${ddgst:-false} 00:29:35.582 }, 00:29:35.582 "method": "bdev_nvme_attach_controller" 00:29:35.582 } 00:29:35.582 EOF 00:29:35.582 )") 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.582 { 00:29:35.582 "params": { 00:29:35.582 "name": "Nvme$subsystem", 00:29:35.582 "trtype": "$TEST_TRANSPORT", 00:29:35.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.582 "adrfam": "ipv4", 00:29:35.582 "trsvcid": "$NVMF_PORT", 00:29:35.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.582 "hdgst": ${hdgst:-false}, 00:29:35.582 "ddgst": ${ddgst:-false} 00:29:35.582 }, 00:29:35.582 "method": "bdev_nvme_attach_controller" 00:29:35.582 } 00:29:35.582 EOF 00:29:35.582 )") 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.582 { 00:29:35.582 "params": { 00:29:35.582 "name": "Nvme$subsystem", 00:29:35.582 "trtype": "$TEST_TRANSPORT", 00:29:35.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.582 "adrfam": "ipv4", 00:29:35.582 "trsvcid": "$NVMF_PORT", 00:29:35.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.582 "hdgst": ${hdgst:-false}, 00:29:35.582 "ddgst": ${ddgst:-false} 00:29:35.582 }, 00:29:35.582 "method": "bdev_nvme_attach_controller" 00:29:35.582 } 00:29:35.582 EOF 00:29:35.582 )") 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.582 { 00:29:35.582 "params": { 00:29:35.582 "name": "Nvme$subsystem", 00:29:35.582 "trtype": "$TEST_TRANSPORT", 
00:29:35.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.582 "adrfam": "ipv4", 00:29:35.582 "trsvcid": "$NVMF_PORT", 00:29:35.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.582 "hdgst": ${hdgst:-false}, 00:29:35.582 "ddgst": ${ddgst:-false} 00:29:35.582 }, 00:29:35.582 "method": "bdev_nvme_attach_controller" 00:29:35.582 } 00:29:35.582 EOF 00:29:35.582 )") 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.582 { 00:29:35.582 "params": { 00:29:35.582 "name": "Nvme$subsystem", 00:29:35.582 "trtype": "$TEST_TRANSPORT", 00:29:35.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.582 "adrfam": "ipv4", 00:29:35.582 "trsvcid": "$NVMF_PORT", 00:29:35.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.582 "hdgst": ${hdgst:-false}, 00:29:35.582 "ddgst": ${ddgst:-false} 00:29:35.582 }, 00:29:35.582 "method": "bdev_nvme_attach_controller" 00:29:35.582 } 00:29:35.582 EOF 00:29:35.582 )") 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.582 { 00:29:35.582 "params": { 00:29:35.582 "name": "Nvme$subsystem", 00:29:35.582 "trtype": "$TEST_TRANSPORT", 00:29:35.582 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.582 "adrfam": "ipv4", 00:29:35.582 "trsvcid": "$NVMF_PORT", 00:29:35.582 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.582 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.582 "hdgst": ${hdgst:-false}, 00:29:35.582 "ddgst": ${ddgst:-false} 00:29:35.582 }, 00:29:35.582 "method": "bdev_nvme_attach_controller" 00:29:35.582 } 00:29:35.582 EOF 00:29:35.582 )") 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:29:35.582 22:12:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:35.582 "params": { 00:29:35.582 "name": "Nvme1", 00:29:35.582 "trtype": "tcp", 00:29:35.582 "traddr": "10.0.0.2", 00:29:35.582 "adrfam": "ipv4", 00:29:35.582 "trsvcid": "4420", 00:29:35.582 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:35.582 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:35.582 "hdgst": false, 00:29:35.582 "ddgst": false 00:29:35.582 }, 00:29:35.582 "method": "bdev_nvme_attach_controller" 00:29:35.582 },{ 00:29:35.582 "params": { 00:29:35.582 "name": "Nvme2", 00:29:35.582 "trtype": "tcp", 00:29:35.582 "traddr": "10.0.0.2", 00:29:35.582 "adrfam": "ipv4", 00:29:35.582 "trsvcid": "4420", 00:29:35.582 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:35.582 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:35.582 "hdgst": false, 00:29:35.582 "ddgst": false 00:29:35.582 }, 00:29:35.582 "method": "bdev_nvme_attach_controller" 00:29:35.582 },{ 00:29:35.582 "params": { 00:29:35.582 "name": "Nvme3", 00:29:35.582 "trtype": "tcp", 00:29:35.582 "traddr": "10.0.0.2", 00:29:35.582 "adrfam": "ipv4", 00:29:35.582 "trsvcid": "4420", 00:29:35.582 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:35.582 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:35.582 "hdgst": false, 00:29:35.582 "ddgst": false 00:29:35.582 }, 00:29:35.582 "method": "bdev_nvme_attach_controller" 00:29:35.582 },{ 00:29:35.582 "params": { 00:29:35.582 "name": "Nvme4", 00:29:35.582 "trtype": "tcp", 00:29:35.582 "traddr": "10.0.0.2", 00:29:35.582 "adrfam": "ipv4", 00:29:35.582 "trsvcid": "4420", 00:29:35.582 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:35.582 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:35.582 "hdgst": false, 00:29:35.582 "ddgst": false 00:29:35.582 }, 00:29:35.582 "method": "bdev_nvme_attach_controller" 00:29:35.582 },{ 00:29:35.582 "params": { 00:29:35.582 "name": "Nvme5", 00:29:35.582 "trtype": "tcp", 00:29:35.582 "traddr": "10.0.0.2", 00:29:35.582 "adrfam": "ipv4", 00:29:35.582 "trsvcid": "4420", 00:29:35.582 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:35.582 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:35.582 "hdgst": false, 00:29:35.582 "ddgst": false 00:29:35.582 }, 00:29:35.582 "method": "bdev_nvme_attach_controller" 00:29:35.582 },{ 00:29:35.582 "params": { 00:29:35.582 "name": "Nvme6", 00:29:35.582 "trtype": "tcp", 00:29:35.582 "traddr": "10.0.0.2", 00:29:35.582 "adrfam": "ipv4", 00:29:35.582 "trsvcid": "4420", 00:29:35.582 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:35.583 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:35.583 "hdgst": false, 00:29:35.583 "ddgst": false 00:29:35.583 }, 00:29:35.583 "method": "bdev_nvme_attach_controller" 00:29:35.583 },{ 00:29:35.583 "params": { 00:29:35.583 "name": "Nvme7", 00:29:35.583 "trtype": "tcp", 00:29:35.583 "traddr": "10.0.0.2", 00:29:35.583 "adrfam": "ipv4", 00:29:35.583 "trsvcid": "4420", 00:29:35.583 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:35.583 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:35.583 "hdgst": false, 00:29:35.583 "ddgst": false 00:29:35.583 }, 00:29:35.583 "method": "bdev_nvme_attach_controller" 00:29:35.583 },{ 00:29:35.583 "params": { 00:29:35.583 "name": "Nvme8", 00:29:35.583 "trtype": "tcp", 00:29:35.583 "traddr": "10.0.0.2", 00:29:35.583 "adrfam": "ipv4", 00:29:35.583 "trsvcid": "4420", 00:29:35.583 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:35.583 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:35.583 "hdgst": false, 
00:29:35.583 "ddgst": false 00:29:35.583 }, 00:29:35.583 "method": "bdev_nvme_attach_controller" 00:29:35.583 },{ 00:29:35.583 "params": { 00:29:35.583 "name": "Nvme9", 00:29:35.583 "trtype": "tcp", 00:29:35.583 "traddr": "10.0.0.2", 00:29:35.583 "adrfam": "ipv4", 00:29:35.583 "trsvcid": "4420", 00:29:35.583 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:35.583 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:35.583 "hdgst": false, 00:29:35.583 "ddgst": false 00:29:35.583 }, 00:29:35.583 "method": "bdev_nvme_attach_controller" 00:29:35.583 },{ 00:29:35.583 "params": { 00:29:35.583 "name": "Nvme10", 00:29:35.583 "trtype": "tcp", 00:29:35.583 "traddr": "10.0.0.2", 00:29:35.583 "adrfam": "ipv4", 00:29:35.583 "trsvcid": "4420", 00:29:35.583 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:35.583 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:35.583 "hdgst": false, 00:29:35.583 "ddgst": false 00:29:35.583 }, 00:29:35.583 "method": "bdev_nvme_attach_controller" 00:29:35.583 }' 00:29:35.583 [2024-07-13 22:12:54.856806] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:35.583 [2024-07-13 22:12:54.856979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4167111 ] 00:29:35.583 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.840 [2024-07-13 22:12:54.992586] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.840 [2024-07-13 22:12:55.229815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.761 Running I/O for 10 seconds... 00:29:38.340 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:38.340 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:38.340 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:38.340 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.340 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:38.340 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.340 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:38.340 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:38.340 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:29:38.340 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:29:38.340 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:29:38.340 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:29:38.340 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:38.340 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:38.340 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:38.340 22:12:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:38.340 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:38.340 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:38.340 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67
00:29:38.340 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']'
00:29:38.340 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25
00:29:38.599 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- ))
00:29:38.599 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:29:38.599 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:29:38.599 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:29:38.599 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:38.599 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:38.599 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:38.599 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=139
00:29:38.599 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 139 -ge 100 ']'
00:29:38.599 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0
00:29:38.599 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break
00:29:38.599 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0
00:29:38.599 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 4167111
00:29:38.599 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 4167111 ']'
00:29:38.599 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 4167111
00:29:38.599 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname
00:29:38.599 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:38.599 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4167111
00:29:38.599 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:29:38.599 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:29:38.599 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4167111'
00:29:38.599 killing process with pid 4167111
00:29:38.599 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 4167111
00:29:38.599 22:12:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 4167111
00:29:38.857 Received shutdown signal, test time was about 1.051549 seconds
00:29:38.857
00:29:38.857                                                                                          Latency(us)
00:29:38.857 Device Information            :   runtime(s)       IOPS      MiB/s     Fail/s      TO/s      Average          min          max
00:29:38.857 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.857 Verification LBA range: start 0x0 length 0x400
00:29:38.857 Nvme1n1                       :         1.02     249.89      15.62       0.00      0.00    253048.04     21456.97    276513.37
00:29:38.857 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.857 Verification LBA range: start 0x0 length 0x400
00:29:38.857 Nvme2n1                       :         1.02     188.42      11.78       0.00      0.00    328028.41     25631.86    298261.62
00:29:38.857 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.857 Verification LBA range: start 0x0 length 0x400
00:29:38.857 Nvme3n1                       :         1.01     189.32      11.83       0.00      0.00    320792.65     30098.01    302921.96
00:29:38.857 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.857 Verification LBA range: start 0x0 length 0x400
00:29:38.857 Nvme4n1                       :         1.00     192.58      12.04       0.00      0.00    306921.37     26408.58    295154.73
00:29:38.857 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.857 Verification LBA range: start 0x0 length 0x400
00:29:38.857 Nvme5n1                       :         0.98     196.74      12.30       0.00      0.00    294852.33     26408.58    327777.09
00:29:38.857 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.857 Verification LBA range: start 0x0 length 0x400
00:29:38.857 Nvme6n1                       :         1.04     184.69      11.54       0.00      0.00    309346.61     28544.57    335544.32
00:29:38.857 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.857 Verification LBA range: start 0x0 length 0x400
00:29:38.857 Nvme7n1                       :         1.03     186.74      11.67       0.00      0.00    298759.71     27379.48    304475.40
00:29:38.857 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.857 Verification LBA range: start 0x0 length 0x400
00:29:38.857 Nvme8n1                       :         0.96     199.35      12.46       0.00      0.00    270582.27     26408.58    282727.16
00:29:38.857 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.857 Verification LBA range: start 0x0 length 0x400
00:29:38.857 Nvme9n1                       :         1.00     192.94      12.06       0.00      0.00    275105.82     22816.24    299815.06
00:29:38.857 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:38.857 Verification LBA range: start 0x0 length 0x400
00:29:38.857 Nvme10n1                      :         1.05     182.74      11.42       0.00      0.00    287035.54     27185.30    347971.89
00:29:38.857 ===================================================================================================================
00:29:38.857 Total                         :              1963.40     122.71       0.00      0.00    293111.82     21456.97    347971.89
00:29:39.790 22:12:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:29:41.162 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 4166793
00:29:41.162 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:29:41.162 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:29:41.162 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:29:41.162 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:41.162 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini
00:29:41.162 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup
00:29:41.162 22:13:00
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:29:41.162 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:41.163 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:29:41.163 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:41.163 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:41.163 rmmod nvme_tcp 00:29:41.163 rmmod nvme_fabrics 00:29:41.163 rmmod nvme_keyring 00:29:41.163 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:41.163 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:29:41.163 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:29:41.163 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 4166793 ']' 00:29:41.163 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 4166793 00:29:41.163 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 4166793 ']' 00:29:41.163 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 4166793 00:29:41.163 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:29:41.163 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:41.163 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4166793 00:29:41.163 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:41.163 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:41.163 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4166793' 00:29:41.163 killing process with pid 4166793 00:29:41.163 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 4166793 00:29:41.163 22:13:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 4166793 00:29:43.692 22:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:43.692 22:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:43.692 22:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:43.692 22:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:43.692 22:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:43.693 22:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.693 22:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:43.693 22:13:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:46.223 00:29:46.223 real 0m12.579s 00:29:46.223 user 0m41.347s 00:29:46.223 sys 0m2.069s 
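tc2's teardown, traced above, unloads the initiator-side kernel modules (the modprobe -v -r cascade that rmmods nvme_tcp, nvme_fabrics, and nvme_keyring) and then kills the long-lived target (pid 4166793, comm reactor_1) through autotest_common.sh's killprocess helper, whose checks are visible in the trace: reject an empty pid, confirm liveness with kill -0, resolve the comm name, refuse to signal a sudo wrapper directly, then kill and reap. A sketch of that logic; the sudo branch that signals the wrapped child is an assumption about the else path this trace never takes:

killprocess() {
    local pid=$1 process_name

    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                   # must still be running

    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi

    if [ "$process_name" = sudo ]; then
        # Assumed branch: signal the wrapped child, not the sudo process itself.
        local child
        child=$(pgrep -P "$pid")
        echo "killing process with pid $child"
        kill "$child"
    else
        echo "killing process with pid $pid"
        kill "$pid"
    fi
    wait "$pid"                                  # reap before teardown continues
}
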
00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:46.223 ************************************ 00:29:46.223 END TEST nvmf_shutdown_tc2 00:29:46.223 ************************************ 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:46.223 ************************************ 00:29:46.223 START TEST nvmf_shutdown_tc3 00:29:46.223 ************************************ 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:46.223 22:13:05 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:46.223 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:46.223 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:46.223 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:46.223 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:29:46.224 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:46.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:46.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:29:46.224 00:29:46.224 --- 10.0.0.2 ping statistics --- 00:29:46.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.224 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:46.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:46.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:29:46.224 00:29:46.224 --- 10.0.0.1 ping statistics --- 00:29:46.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.224 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=4168422 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 4168422 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 4168422 ']' 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
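nvmfappstart, traced above, launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with core mask 0x1E, records nvmfpid=4168422, and then parks in waitforlisten until the app's JSON-RPC socket answers, so the rpc_cmd calls that follow cannot race startup. A minimal sketch of that gate: max_retries=100 comes straight from the trace, while the rpc.py probe, the method it calls, and the poll interval are assumptions about the loop body the trace does not show:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100

    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while ((max_retries-- > 0)); do
        kill -0 "$pid" || return 1      # target died during startup
        # Consider the RPC server up once any method call succeeds (assumed probe).
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1                            # never came up within the retry budget
}

# Usage as in the trace: waitforlisten "$nvmfpid"
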
00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:46.224 22:13:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:46.224 [2024-07-13 22:13:05.428242] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:46.224 [2024-07-13 22:13:05.428390] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:46.224 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.224 [2024-07-13 22:13:05.583168] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:46.482 [2024-07-13 22:13:05.844837] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:46.482 [2024-07-13 22:13:05.844925] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:46.482 [2024-07-13 22:13:05.844954] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:46.482 [2024-07-13 22:13:05.844975] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:46.482 [2024-07-13 22:13:05.844997] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:46.482 [2024-07-13 22:13:05.845140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:46.482 [2024-07-13 22:13:05.845226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:46.482 [2024-07-13 22:13:05.845265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:46.482 [2024-07-13 22:13:05.845276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:47.048 [2024-07-13 22:13:06.396360] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.048 22:13:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:47.306 Malloc1 00:29:47.306 [2024-07-13 22:13:06.537137] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:47.306 Malloc2 00:29:47.564 Malloc3 00:29:47.564 Malloc4 00:29:47.564 Malloc5 00:29:47.822 Malloc6 00:29:47.822 Malloc7 00:29:47.822 Malloc8 00:29:48.080 Malloc9 00:29:48.080 Malloc10 00:29:48.080 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.080 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:29:48.080 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:48.080 
22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:48.080 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=4168721 00:29:48.080 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 4168721 /var/tmp/bdevperf.sock 00:29:48.080 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 4168721 ']' 00:29:48.080 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:48.080 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:48.080 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:48.080 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:48.080 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:48.081 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:29:48.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:48.081 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:48.081 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:29:48.081 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:48.081 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:48.081 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.081 { 00:29:48.081 "params": { 00:29:48.081 "name": "Nvme$subsystem", 00:29:48.081 "trtype": "$TEST_TRANSPORT", 00:29:48.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.081 "adrfam": "ipv4", 00:29:48.081 "trsvcid": "$NVMF_PORT", 00:29:48.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.081 "hdgst": ${hdgst:-false}, 00:29:48.081 "ddgst": ${ddgst:-false} 00:29:48.081 }, 00:29:48.081 "method": "bdev_nvme_attach_controller" 00:29:48.081 } 00:29:48.081 EOF 00:29:48.081 )") 00:29:48.081 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:48.081 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:48.081 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.081 { 00:29:48.081 "params": { 00:29:48.081 "name": "Nvme$subsystem", 00:29:48.081 "trtype": "$TEST_TRANSPORT", 00:29:48.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.081 "adrfam": "ipv4", 00:29:48.081 "trsvcid": "$NVMF_PORT", 00:29:48.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.081 "hdgst": ${hdgst:-false}, 00:29:48.081 "ddgst": ${ddgst:-false} 00:29:48.081 }, 00:29:48.081 "method": "bdev_nvme_attach_controller" 00:29:48.081 } 00:29:48.081 EOF 00:29:48.081 )") 00:29:48.081 22:13:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:48.081 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:48.081 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.081 { 00:29:48.081 "params": { 00:29:48.081 "name": "Nvme$subsystem", 00:29:48.081 "trtype": "$TEST_TRANSPORT", 00:29:48.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.081 "adrfam": "ipv4", 00:29:48.081 "trsvcid": "$NVMF_PORT", 00:29:48.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.081 "hdgst": ${hdgst:-false}, 00:29:48.081 "ddgst": ${ddgst:-false} 00:29:48.081 }, 00:29:48.081 "method": "bdev_nvme_attach_controller" 00:29:48.081 } 00:29:48.081 EOF 00:29:48.081 )") 00:29:48.081 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:48.081 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:48.081 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.081 { 00:29:48.081 "params": { 00:29:48.081 "name": "Nvme$subsystem", 00:29:48.081 "trtype": "$TEST_TRANSPORT", 00:29:48.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.081 "adrfam": "ipv4", 00:29:48.081 "trsvcid": "$NVMF_PORT", 00:29:48.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.081 "hdgst": ${hdgst:-false}, 00:29:48.081 "ddgst": ${ddgst:-false} 00:29:48.081 }, 00:29:48.081 "method": "bdev_nvme_attach_controller" 00:29:48.081 } 00:29:48.081 EOF 00:29:48.081 )") 00:29:48.081 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:48.081 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:48.081 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.081 { 00:29:48.081 "params": { 00:29:48.081 "name": "Nvme$subsystem", 00:29:48.081 "trtype": "$TEST_TRANSPORT", 00:29:48.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.081 "adrfam": "ipv4", 00:29:48.081 "trsvcid": "$NVMF_PORT", 00:29:48.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.081 "hdgst": ${hdgst:-false}, 00:29:48.081 "ddgst": ${ddgst:-false} 00:29:48.081 }, 00:29:48.081 "method": "bdev_nvme_attach_controller" 00:29:48.081 } 00:29:48.081 EOF 00:29:48.081 )") 00:29:48.081 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:48.081 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:48.081 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.081 { 00:29:48.081 "params": { 00:29:48.081 "name": "Nvme$subsystem", 00:29:48.081 "trtype": "$TEST_TRANSPORT", 00:29:48.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.081 "adrfam": "ipv4", 00:29:48.081 "trsvcid": "$NVMF_PORT", 00:29:48.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.081 "hdgst": ${hdgst:-false}, 00:29:48.081 "ddgst": ${ddgst:-false} 00:29:48.081 }, 00:29:48.081 "method": "bdev_nvme_attach_controller" 00:29:48.081 } 00:29:48.081 EOF 00:29:48.081 )") 00:29:48.081 22:13:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:48.081 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:48.081 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.081 { 00:29:48.081 "params": { 00:29:48.081 "name": "Nvme$subsystem", 00:29:48.081 "trtype": "$TEST_TRANSPORT", 00:29:48.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.081 "adrfam": "ipv4", 00:29:48.081 "trsvcid": "$NVMF_PORT", 00:29:48.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.081 "hdgst": ${hdgst:-false}, 00:29:48.081 "ddgst": ${ddgst:-false} 00:29:48.081 }, 00:29:48.081 "method": "bdev_nvme_attach_controller" 00:29:48.081 } 00:29:48.081 EOF 00:29:48.081 )") 00:29:48.081 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:48.081 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:48.081 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.081 { 00:29:48.081 "params": { 00:29:48.081 "name": "Nvme$subsystem", 00:29:48.081 "trtype": "$TEST_TRANSPORT", 00:29:48.081 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.081 "adrfam": "ipv4", 00:29:48.081 "trsvcid": "$NVMF_PORT", 00:29:48.081 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.081 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.081 "hdgst": ${hdgst:-false}, 00:29:48.081 "ddgst": ${ddgst:-false} 00:29:48.081 }, 00:29:48.081 "method": "bdev_nvme_attach_controller" 00:29:48.081 } 00:29:48.081 EOF 00:29:48.081 )") 00:29:48.340 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:48.340 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:48.340 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.340 { 00:29:48.340 "params": { 00:29:48.340 "name": "Nvme$subsystem", 00:29:48.340 "trtype": "$TEST_TRANSPORT", 00:29:48.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.340 "adrfam": "ipv4", 00:29:48.340 "trsvcid": "$NVMF_PORT", 00:29:48.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.340 "hdgst": ${hdgst:-false}, 00:29:48.340 "ddgst": ${ddgst:-false} 00:29:48.340 }, 00:29:48.340 "method": "bdev_nvme_attach_controller" 00:29:48.340 } 00:29:48.340 EOF 00:29:48.340 )") 00:29:48.340 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:48.340 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:48.340 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.340 { 00:29:48.340 "params": { 00:29:48.340 "name": "Nvme$subsystem", 00:29:48.340 "trtype": "$TEST_TRANSPORT", 00:29:48.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.340 "adrfam": "ipv4", 00:29:48.340 "trsvcid": "$NVMF_PORT", 00:29:48.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.340 "hdgst": ${hdgst:-false}, 00:29:48.340 "ddgst": ${ddgst:-false} 00:29:48.340 }, 00:29:48.340 "method": "bdev_nvme_attach_controller" 00:29:48.340 } 00:29:48.340 EOF 00:29:48.340 )") 00:29:48.340 22:13:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:29:48.340 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:29:48.340 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:29:48.340 22:13:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:48.340 "params": { 00:29:48.340 "name": "Nvme1", 00:29:48.340 "trtype": "tcp", 00:29:48.341 "traddr": "10.0.0.2", 00:29:48.341 "adrfam": "ipv4", 00:29:48.341 "trsvcid": "4420", 00:29:48.341 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:48.341 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:48.341 "hdgst": false, 00:29:48.341 "ddgst": false 00:29:48.341 }, 00:29:48.341 "method": "bdev_nvme_attach_controller" 00:29:48.341 },{ 00:29:48.341 "params": { 00:29:48.341 "name": "Nvme2", 00:29:48.341 "trtype": "tcp", 00:29:48.341 "traddr": "10.0.0.2", 00:29:48.341 "adrfam": "ipv4", 00:29:48.341 "trsvcid": "4420", 00:29:48.341 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:48.341 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:48.341 "hdgst": false, 00:29:48.341 "ddgst": false 00:29:48.341 }, 00:29:48.341 "method": "bdev_nvme_attach_controller" 00:29:48.341 },{ 00:29:48.341 "params": { 00:29:48.341 "name": "Nvme3", 00:29:48.341 "trtype": "tcp", 00:29:48.341 "traddr": "10.0.0.2", 00:29:48.341 "adrfam": "ipv4", 00:29:48.341 "trsvcid": "4420", 00:29:48.341 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:48.341 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:48.341 "hdgst": false, 00:29:48.341 "ddgst": false 00:29:48.341 }, 00:29:48.341 "method": "bdev_nvme_attach_controller" 00:29:48.341 },{ 00:29:48.341 "params": { 00:29:48.341 "name": "Nvme4", 00:29:48.341 "trtype": "tcp", 00:29:48.341 "traddr": "10.0.0.2", 00:29:48.341 "adrfam": "ipv4", 00:29:48.341 "trsvcid": "4420", 00:29:48.341 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:48.341 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:48.341 "hdgst": false, 00:29:48.341 "ddgst": false 00:29:48.341 }, 00:29:48.341 "method": "bdev_nvme_attach_controller" 00:29:48.341 },{ 00:29:48.341 "params": { 00:29:48.341 "name": "Nvme5", 00:29:48.341 "trtype": "tcp", 00:29:48.341 "traddr": "10.0.0.2", 00:29:48.341 "adrfam": "ipv4", 00:29:48.341 "trsvcid": "4420", 00:29:48.341 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:48.341 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:48.341 "hdgst": false, 00:29:48.341 "ddgst": false 00:29:48.341 }, 00:29:48.341 "method": "bdev_nvme_attach_controller" 00:29:48.341 },{ 00:29:48.341 "params": { 00:29:48.341 "name": "Nvme6", 00:29:48.341 "trtype": "tcp", 00:29:48.341 "traddr": "10.0.0.2", 00:29:48.341 "adrfam": "ipv4", 00:29:48.341 "trsvcid": "4420", 00:29:48.341 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:48.341 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:48.341 "hdgst": false, 00:29:48.341 "ddgst": false 00:29:48.341 }, 00:29:48.341 "method": "bdev_nvme_attach_controller" 00:29:48.341 },{ 00:29:48.341 "params": { 00:29:48.341 "name": "Nvme7", 00:29:48.341 "trtype": "tcp", 00:29:48.341 "traddr": "10.0.0.2", 00:29:48.341 "adrfam": "ipv4", 00:29:48.341 "trsvcid": "4420", 00:29:48.341 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:48.341 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:48.341 "hdgst": false, 00:29:48.341 "ddgst": false 00:29:48.341 }, 00:29:48.341 "method": "bdev_nvme_attach_controller" 00:29:48.341 },{ 00:29:48.341 "params": { 00:29:48.341 "name": "Nvme8", 00:29:48.341 "trtype": "tcp", 00:29:48.341 "traddr": "10.0.0.2", 00:29:48.341 "adrfam": "ipv4", 
00:29:48.341 "trsvcid": "4420", 00:29:48.341 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:48.341 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:48.341 "hdgst": false, 00:29:48.341 "ddgst": false 00:29:48.341 }, 00:29:48.341 "method": "bdev_nvme_attach_controller" 00:29:48.341 },{ 00:29:48.341 "params": { 00:29:48.341 "name": "Nvme9", 00:29:48.341 "trtype": "tcp", 00:29:48.341 "traddr": "10.0.0.2", 00:29:48.341 "adrfam": "ipv4", 00:29:48.341 "trsvcid": "4420", 00:29:48.341 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:48.341 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:48.341 "hdgst": false, 00:29:48.341 "ddgst": false 00:29:48.341 }, 00:29:48.341 "method": "bdev_nvme_attach_controller" 00:29:48.341 },{ 00:29:48.341 "params": { 00:29:48.341 "name": "Nvme10", 00:29:48.341 "trtype": "tcp", 00:29:48.341 "traddr": "10.0.0.2", 00:29:48.341 "adrfam": "ipv4", 00:29:48.341 "trsvcid": "4420", 00:29:48.341 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:48.341 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:48.341 "hdgst": false, 00:29:48.341 "ddgst": false 00:29:48.341 }, 00:29:48.341 "method": "bdev_nvme_attach_controller" 00:29:48.341 }' 00:29:48.341 [2024-07-13 22:13:07.531931] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:48.341 [2024-07-13 22:13:07.532086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4168721 ] 00:29:48.341 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.341 [2024-07-13 22:13:07.665698] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.599 [2024-07-13 22:13:07.905633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.152 Running I/O for 10 seconds... 
00:29:51.152 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:51.152 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:29:51.152 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:51.152 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.152 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:51.152 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.152 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:51.152 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:51.152 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:51.152 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:29:51.152 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:29:51.152 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:29:51.152 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:29:51.152 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:51.152 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:51.152 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:51.152 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.152 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:51.152 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.152 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:29:51.152 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:29:51.152 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:51.410 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:51.410 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:51.410 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:51.410 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:51.410 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.410 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:51.410 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.410 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 
-- # read_io_count=67 00:29:51.410 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:29:51.410 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:29:51.683 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:29:51.683 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:29:51.683 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:51.683 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:29:51.683 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.683 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:51.683 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.683 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:29:51.684 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:29:51.684 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:29:51.684 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:29:51.684 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:29:51.684 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 4168422 00:29:51.684 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 4168422 ']' 00:29:51.684 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 4168422 00:29:51.684 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:29:51.684 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:51.684 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4168422 00:29:51.684 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:51.684 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:51.684 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4168422' killing process with pid 4168422 00:29:51.684 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 4168422 00:29:51.684 22:13:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 4168422
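The xtrace above is shutdown.sh's waitforio helper running to completion, followed by killprocess tearing down the nvmf target (pid 4168422) while bdevperf still has I/O in flight. A minimal sketch of both helpers, reconstructed from the trace rather than copied from the scripts; the @NN comments refer to the script line numbers shown in the trace, and rpc_cmd is the suite's JSON-RPC wrapper, assumed to be in scope:

```bash
# Sketch of target/shutdown.sh:waitforio as reconstructed from the xtrace;
# the real helper may differ in details the trace does not show.
waitforio() {
	local sock=$1 bdev=$2
	local ret=1 i read_io_count
	[ -z "$sock" ] && return 1    # @50: need the bdevperf RPC socket
	[ -z "$bdev" ] && return 1    # @54: need the bdev name
	for ((i = 10; i != 0; i--)); do    # @59: at most 10 polls
		# @60: sample the bdev's completed read count over JSON-RPC
		read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
			jq -r '.bdevs[0].num_read_ops')
		if [ "$read_io_count" -ge 100 ]; then    # @63: enough I/O observed
			ret=0    # @64
			break    # @65
		fi
		sleep 0.25    # @67
	done
	return $ret    # @69
}

# Companion sketch of common/autotest_common.sh:killprocess; the sudo branch
# is only hinted at by the trace ('[' reactor_1 = sudo ']') and is left as a
# placeholder here.
killprocess() {
	local pid=$1 process_name
	[ -z "$pid" ] && return 1                            # @948
	kill -0 "$pid"                                       # @952: pid must exist
	if [ "$(uname)" = Linux ]; then                      # @953
		process_name=$(ps --no-headers -o comm= "$pid")  # @954
	fi
	# @958: a bare sudo wrapper would need different handling; the real
	# logic is not visible in this trace.
	[ "$process_name" = sudo ] || true
	echo "killing process with pid $pid"                 # @966
	kill "$pid"                                          # @967
	wait "$pid"                                          # @972: reap it
}
```

In this run the loop exits on its third sample (read_io_count goes 3, 67, 131 against the threshold of 100), so the target dies with a full queue of commands outstanding; the tcp.c and nvme_qpair.c errors that follow are the two sides logging that teardown.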
00:29:51.684 [2024-07-13 22:13:10.968378] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a080 is same with the state(5) to be set 00:29:51.684 [2024-07-13 22:13:10.972597] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c480 is same with the state(5) to be set
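From here to the end of the excerpt the target's own log dominates: nvmf_tcp_qpair_set_recv_state() in tcp.c fires once per poller pass for each dying qpair (tqpair=0x61800000a080, ...c480, ...a480, and further down ...a880 and ...ac80) because the qpair is already in the recv state being set (state 5 in this build), so the same *ERROR* line recurs with only the microsecond timestamp changing. When triaging such a flood, counting occurrences per qpair is usually enough; a hypothetical one-liner over a saved copy of this console output (the file name is a placeholder):

```bash
# Collapse the repeated recv-state errors to one count per qpair; the log
# file name stands in for wherever this console output was captured.
grep -o 'tqpair=0x[0-9a-f]* is same with the state([0-9]*)' nvmf-shutdown.log |
	sort | uniq -c | sort -rn
```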
00:29:51.684 [2024-07-13 22:13:10.975337] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:51.684 [2024-07-13 22:13:10.976495]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a480 is same with the state(5) to be set 00:29:51.685 [2024-07-13 22:13:10.978760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-13 22:13:10.978819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-13 22:13:10.978887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-13 22:13:10.978915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-13 22:13:10.978942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-13 22:13:10.978964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-13 22:13:10.978989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-13 22:13:10.979011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-13 22:13:10.979035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-13 22:13:10.979057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-13 22:13:10.979080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-13 22:13:10.979101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-13 22:13:10.979125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-13 22:13:10.979147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-13 22:13:10.979181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-13 22:13:10.979203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-13 22:13:10.979227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-13 22:13:10.979248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-13 22:13:10.979272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-13 22:13:10.979294] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-13 22:13:10.979317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-13 22:13:10.979338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-13 22:13:10.979361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-13 22:13:10.979383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.685 [2024-07-13 22:13:10.979407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.685 [2024-07-13 22:13:10.979428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-13 22:13:10.979452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-13 22:13:10.979478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-13 22:13:10.979502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-13 22:13:10.979523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-13 22:13:10.979547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-13 22:13:10.979569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-13 22:13:10.979592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-13 22:13:10.979614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-13 22:13:10.979657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-13 22:13:10.979680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-13 22:13:10.979703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-13 22:13:10.979725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-13 22:13:10.979748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-13 22:13:10.979769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-13 22:13:10.979792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-13 22:13:10.979814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-13 22:13:10.979837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-13 22:13:10.979872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-13 22:13:10.979899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-13 22:13:10.979921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-13 22:13:10.979944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-13 22:13:10.979965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-13 22:13:10.979989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-13 22:13:10.980010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-13 22:13:10.980034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-13 22:13:10.980055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-13 22:13:10.980083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-13 22:13:10.980105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-13 22:13:10.980128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-13 22:13:10.980150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-13 22:13:10.980183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-13 22:13:10.980205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.686 [2024-07-13 22:13:10.980228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-13 22:13:10.980249] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.686 [2024-07-13 22:13:10.980273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-13 22:13:10.980294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.686 [2024-07-13 22:13:10.980318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-13 22:13:10.980339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.686 [2024-07-13 22:13:10.980362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-13 22:13:10.980383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.686 [2024-07-13 22:13:10.980406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-13 22:13:10.980427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.686 [2024-07-13 22:13:10.980450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-13 22:13:10.980471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.686 [2024-07-13 22:13:10.980490] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000a880 is same with the state(5) to be set
00:29:51.686 [2024-07-13 22:13:10.980494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-13 22:13:10.980516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.686 [2024-07-13 22:13:10.980540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-13 22:13:10.980563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.686 [2024-07-13 22:13:10.980593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.686 [2024-07-13 22:13:10.980615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.687 [2024-07-13 22:13:10.980639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-13 22:13:10.980663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.687 [2024-07-13 22:13:10.980686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-13 22:13:10.980709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.687 [2024-07-13 22:13:10.980732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-13 22:13:10.980754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.687 [2024-07-13 22:13:10.980778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-13 22:13:10.980800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.687 [2024-07-13 22:13:10.980826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-13 22:13:10.980847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.687 [2024-07-13 22:13:10.980897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-13 22:13:10.980920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.687 [2024-07-13 22:13:10.980944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-13 22:13:10.980968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.687 [2024-07-13 22:13:10.980994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-13 22:13:10.981015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.687 [2024-07-13 22:13:10.981039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-13 22:13:10.981063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.687 [2024-07-13 22:13:10.981087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-13 22:13:10.981109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.687 [2024-07-13 22:13:10.981133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-13 22:13:10.981157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.687 [2024-07-13 22:13:10.981189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-13 22:13:10.981210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.687 [2024-07-13 22:13:10.981234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-13 22:13:10.981257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.687 [2024-07-13 22:13:10.981282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-13 22:13:10.981304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.687 [2024-07-13 22:13:10.981327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-13 22:13:10.981351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.687 [2024-07-13 22:13:10.981375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-13 22:13:10.981398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.687 [2024-07-13 22:13:10.981422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-13 22:13:10.981444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.687 [2024-07-13 22:13:10.981476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-13 22:13:10.981499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.687 [2024-07-13 22:13:10.981523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-13 22:13:10.981545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.687 [2024-07-13 22:13:10.981570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-13 22:13:10.981592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.687 [2024-07-13 22:13:10.981617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-13 22:13:10.981640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.687 [2024-07-13 22:13:10.981664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.687 [2024-07-13 22:13:10.981685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.687 [2024-07-13 22:13:10.981715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.688 [2024-07-13 22:13:10.981738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.688 [2024-07-13 22:13:10.981762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.688 [2024-07-13 22:13:10.981783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.688 [2024-07-13 22:13:10.981807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.688 [2024-07-13 22:13:10.981828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.688 [2024-07-13 22:13:10.981852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.688 [2024-07-13 22:13:10.981882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.688 [2024-07-13 22:13:10.981951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:51.688 [2024-07-13 22:13:10.984180] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set
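Every aborted command above completes with status (00/08): status code type 0x0 is the NVMe generic command status set, and code 0x08 in that set is Command Aborted due to SQ Deletion, matching the SQ DELETION text that spdk_nvme_print_completion prints. The closing CQ transport error -6 is errno ENXIO, No such device or address, reported once the TCP connection underneath the qpair is gone. A small decoder covering just the values seen in this run (a sketch, not SPDK's own tooling):

```bash
# Map the "(SCT/SC)" pair printed by spdk_nvme_print_completion to text.
# Only statuses that appear in this log are covered; everything else is
# reported as unmapped.
decode_nvme_status() {
	case "$1/$2" in
		00/00) echo 'GENERIC: SUCCESSFUL COMPLETION' ;;
		00/08) echo 'GENERIC: COMMAND ABORTED DUE TO SQ DELETION' ;;
		*)     echo "unmapped NVMe status sct=$1 sc=$2" ;;
	esac
}

decode_nvme_status 00 08    # prints the SQ-deletion abort seen above
```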
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.688 [2024-07-13 22:13:10.984416] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.688 [2024-07-13 22:13:10.984433] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.688 [2024-07-13 22:13:10.984463] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.688 [2024-07-13 22:13:10.984481] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.688 [2024-07-13 22:13:10.984499] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.688 [2024-07-13 22:13:10.984518] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.688 [2024-07-13 22:13:10.984535] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.688 [2024-07-13 22:13:10.984552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.688 [2024-07-13 22:13:10.984569] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.688 [2024-07-13 22:13:10.984586] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.688 [2024-07-13 22:13:10.984615] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.688 [2024-07-13 22:13:10.984632] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.688 [2024-07-13 22:13:10.984649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.688 [2024-07-13 22:13:10.984667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.688 [2024-07-13 22:13:10.984685] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.688 [2024-07-13 22:13:10.984702] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.688 [2024-07-13 22:13:10.984719] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.984735] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.984753] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.984769] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.984786] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.984803] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.984820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.984838] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.984860] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.984895] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.984915] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.984933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.984955] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.984973] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.984991] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.985008] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.985025] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.985041] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.985058] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.985074] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.985091] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.985108] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.985125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.985143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.985195] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.985213] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.985231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.985248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.985265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.985282] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.985300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.985317] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.985335] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.985352] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.985369] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000ac80 is same with the state(5) to be set 00:29:51.689 [2024-07-13 22:13:10.985628] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f9d00 was disconnected and freed. reset controller. 
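The tail of that burst is the host-side story in miniature: spdk_nvme_qpair_process_completions() surfaces the dead TCP connection as CQ transport error -6 (-ENXIO), every in-flight READ is completed as ABORTED - SQ DELETION, and bdev_nvme reacts by resetting the controller. A minimal sketch of that detect-and-reset pattern against the public SPDK API follows; g_ctrlr/g_qpair and the reset-on-first-error policy are illustrative assumptions, not the literal bdev_nvme code path.

    #include <stdio.h>
    #include "spdk/nvme.h"

    static struct spdk_nvme_ctrlr *g_ctrlr;  /* hypothetical: connected elsewhere */
    static struct spdk_nvme_qpair *g_qpair;  /* hypothetical: allocated elsewhere */

    static void
    poll_io(void)
    {
            /* max_completions == 0 means "drain whatever is ready" */
            int32_t rc = spdk_nvme_qpair_process_completions(g_qpair, 0);

            if (rc < 0) {
                    /* -6 is -ENXIO, matching the log above: the transport is
                     * gone, in-flight I/O comes back as ABORTED - SQ DELETION,
                     * and the controller must be reset before retrying. */
                    fprintf(stderr, "qpair dead (rc=%d), resetting controller\n", rc);
                    if (spdk_nvme_ctrlr_reset(g_ctrlr) != 0) {
                            fprintf(stderr, "controller reset failed\n");
                    }
            }
    }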
00:29:51.689 [2024-07-13 22:13:10.985779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.689 [2024-07-13 22:13:10.985811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.689 [2024-07-13 22:13:10.985841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.689 [2024-07-13 22:13:10.985882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.689 [2024-07-13 22:13:10.985905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.689 [2024-07-13 22:13:10.985925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.689 [2024-07-13 22:13:10.985947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.689 [2024-07-13 22:13:10.985974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.689 [2024-07-13 22:13:10.985994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6880 is same with the state(5) to be set
00:29:51.689 [2024-07-13 22:13:10.986092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.689 [2024-07-13 22:13:10.986120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.689 [2024-07-13 22:13:10.986142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.689 [2024-07-13 22:13:10.986174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.689 [2024-07-13 22:13:10.986194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.689 [2024-07-13 22:13:10.986214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.689 [2024-07-13 22:13:10.986234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.689 [2024-07-13 22:13:10.986254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.689 [2024-07-13 22:13:10.986273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3b80 is same with the state(5) to be set
00:29:51.689 [2024-07-13 22:13:10.986349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.689 [2024-07-13 22:13:10.986376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.689 [2024-07-13 22:13:10.986397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.689 [2024-07-13 22:13:10.986417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.689 [2024-07-13 22:13:10.986447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.689 [2024-07-13 22:13:10.986467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.689 [2024-07-13 22:13:10.986487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.689 [2024-07-13 22:13:10.986507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.689 [2024-07-13 22:13:10.986525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3400 is same with the state(5) to be set
00:29:51.689 [2024-07-13 22:13:10.986588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.689 [2024-07-13 22:13:10.986619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.689 [2024-07-13 22:13:10.986641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.689 [2024-07-13 22:13:10.986661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.689 [2024-07-13 22:13:10.986682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.689 [2024-07-13 22:13:10.986715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.689 [2024-07-13 22:13:10.986737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.689 [2024-07-13 22:13:10.986758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.689 [2024-07-13 22:13:10.986776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:29:51.689 [2024-07-13 22:13:10.986840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.689 [2024-07-13 22:13:10.986881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.689 [2024-07-13 22:13:10.986905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.689 [2024-07-13 22:13:10.986925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.689 [2024-07-13 22:13:10.986947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.689 [2024-07-13 22:13:10.986967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.689 [2024-07-13 22:13:10.986987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.689 [2024-07-13 22:13:10.987006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.689 [2024-07-13 22:13:10.987025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set
00:29:51.689 [2024-07-13 22:13:10.987022] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b080 is same with the state(5) to be set
[... same target-side message for tqpair=0x61800000b080 repeated roughly sixty more times (22:13:10.987059 through 22:13:10.988196), originally spliced into the host output above ...]
00:29:51.690 [2024-07-13 22:13:10.990586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:29:51.690 [2024-07-13 22:13:10.990626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000b480 is same with the state(5) to be set
00:29:51.690 [2024-07-13 22:13:10.990649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6880 (9): Bad file descriptor
[... same target-side message for tqpair=0x61800000b480 repeated roughly sixty-five more times (22:13:10.990660 through 22:13:10.991914) ...]
00:29:51.691 [2024-07-13 22:13:10.994283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.691 [2024-07-13 22:13:10.994330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6880 with addr=10.0.0.2, port=4420
00:29:51.691 [2024-07-13 22:13:10.994358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6880 is same with the state(5) to be set
00:29:51.691 [2024-07-13 22:13:10.995193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6880 (9): Bad file descriptor
00:29:51.691 [2024-07-13 22:13:10.995319] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:51.691 [2024-07-13 22:13:10.995419] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:51.691 [2024-07-13 22:13:10.995502] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:51.691 [2024-07-13 22:13:10.996111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:29:51.691 [2024-07-13 22:13:10.996144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:29:51.691 [2024-07-13 22:13:10.996179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
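The reset itself then fails: nvme_ctrlr_disconnect tears down the admin connection, the reconnect attempt gets connect() errno 111 (ECONNREFUSED) from the target at 10.0.0.2:4420, and spdk_nvme_ctrlr_reconnect_poll_async() reports "controller reinitialization failed", leaving cnode10 in a failed state. A minimal sketch of the public disconnect/reconnect-async sequence those messages come from is below; the -EAGAIN polling convention is assumed from SPDK's documented usage, and try_reset() is a hypothetical helper, not part of this test.

    #include <errno.h>
    #include "spdk/nvme.h"

    static int
    try_reset(struct spdk_nvme_ctrlr *ctrlr)
    {
            int rc = spdk_nvme_ctrlr_disconnect(ctrlr);
            if (rc != 0) {
                    return rc;  /* e.g. a reset is already in progress */
            }

            spdk_nvme_ctrlr_reconnect_async(ctrlr);

            /* Poll until the reconnect settles; a real app would poll from
             * its reactor instead of spinning like this. */
            do {
                    rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
            } while (rc == -EAGAIN);

            /* rc != 0 is the "controller reinitialization failed" path in
             * the log, e.g. when connect() gets ECONNREFUSED because the
             * target listener is gone; the ctrlr is then marked failed. */
            return rc;
    }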
00:29:51.691 [2024-07-13 22:13:10.996291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.691 [2024-07-13 22:13:10.996322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.691 [2024-07-13 22:13:10.996346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.691 [2024-07-13 22:13:10.996373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.691 [2024-07-13 22:13:10.996394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.691 [2024-07-13 22:13:10.996416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.691 [2024-07-13 22:13:10.996437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.691 [2024-07-13 22:13:10.996458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.691 [2024-07-13 22:13:10.996476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(5) to be set
00:29:51.691 [2024-07-13 22:13:10.996544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.691 [2024-07-13 22:13:10.996570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.691 [2024-07-13 22:13:10.996593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.691 [2024-07-13 22:13:10.996613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.691 [2024-07-13 22:13:10.996635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.691 [2024-07-13 22:13:10.996655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.691 [2024-07-13 22:13:10.996677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.691 [2024-07-13 22:13:10.996697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.691 [2024-07-13 22:13:10.996716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4a80 is same with the state(5) to be set
00:29:51.691 [2024-07-13 22:13:10.996758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3b80 (9): Bad file descriptor
00:29:51.691 [2024-07-13 22:13:10.996815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3400 (9): Bad file descriptor
00:29:51.691 [2024-07-13 22:13:10.996876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:29:51.691 [2024-07-13 22:13:10.996919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2c80 (9): Bad file descriptor
00:29:51.691 [2024-07-13 22:13:10.996987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.691 [2024-07-13 22:13:10.997016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.691 [2024-07-13 22:13:10.997038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.691 [2024-07-13 22:13:10.997058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.691 [2024-07-13 22:13:10.997079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.691 [2024-07-13 22:13:10.997099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.691 [2024-07-13 22:13:10.997119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.691 [2024-07-13 22:13:10.997144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.691 [2024-07-13 22:13:10.997173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5200 is same with the state(5) to be set
00:29:51.691 [2024-07-13 22:13:10.997395] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:51.691 [2024-07-13 22:13:10.997490] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:51.691 [2024-07-13 22:13:10.997671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.691 [2024-07-13 22:13:10.997703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.691 [2024-07-13 22:13:10.997748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.691 [2024-07-13 22:13:10.997776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.691 [2024-07-13 22:13:10.997802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.691 [2024-07-13 22:13:10.997825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.691 [2024-07-13 22:13:10.997858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.691 [2024-07-13 22:13:10.997889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.691 [2024-07-13 22:13:10.997915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.691 [2024-07-13 22:13:10.997936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.691 [2024-07-13 22:13:10.997960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.691 [2024-07-13 22:13:10.997981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.691 [2024-07-13 22:13:10.998005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.691 [2024-07-13 22:13:10.998027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.692 [2024-07-13 22:13:10.998051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.692 [2024-07-13 22:13:10.998071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.692 [2024-07-13 22:13:10.998094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.692 [2024-07-13 22:13:10.998115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.692 [2024-07-13 22:13:10.998138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.692 [2024-07-13 22:13:10.998164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.692 [2024-07-13 22:13:10.998187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.692 [2024-07-13 22:13:10.998207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.692 [2024-07-13 22:13:10.998244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.692 [2024-07-13 22:13:10.998267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.692 [2024-07-13 22:13:10.998290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.692 [2024-07-13 22:13:10.998316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.692 [2024-07-13 22:13:10.998339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.692 [2024-07-13 22:13:10.998360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.692 [2024-07-13 22:13:10.998383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.692 [2024-07-13 22:13:10.998404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.692 [2024-07-13 22:13:10.998401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set
[... same target-side message for tqpair=0x61800000bc80 repeated roughly fifty more times (22:13:10.998439 through 22:13:10.999256), originally spliced into the host notices below ...]
00:29:51.692 [2024-07-13 22:13:10.998427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.692 [2024-07-13 22:13:10.998448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.692 [2024-07-13 22:13:10.998471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.692 [2024-07-13 22:13:10.998492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.692 [2024-07-13 22:13:10.998516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.692 [2024-07-13 22:13:10.998537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.692 [2024-07-13 22:13:10.998561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.692 [2024-07-13 22:13:10.998582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.692 [2024-07-13 22:13:10.998615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.692 [2024-07-13 22:13:10.998638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.692 [2024-07-13 22:13:10.998662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.692 [2024-07-13 22:13:10.998684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.692 [2024-07-13 22:13:10.998708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.692 [2024-07-13 22:13:10.998729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.692 [2024-07-13 22:13:10.998753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.692 [2024-07-13 22:13:10.998775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.692 [2024-07-13 22:13:10.998799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.692 [2024-07-13 22:13:10.998820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.692 [2024-07-13 22:13:10.998845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.692 [2024-07-13 22:13:10.998878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.692 [2024-07-13 22:13:10.998905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.692 [2024-07-13 22:13:10.998926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.692 [2024-07-13 22:13:10.998952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.692 [2024-07-13 22:13:10.998973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.692 [2024-07-13 22:13:10.998997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.694 [2024-07-13 22:13:10.999020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.694 [2024-07-13 22:13:10.999045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.694 [2024-07-13 22:13:10.999066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.694 [2024-07-13 22:13:10.999091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.694 [2024-07-13 22:13:10.999115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.694 [2024-07-13 22:13:10.999140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.694 [2024-07-13 22:13:10.999172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.694 [2024-07-13 22:13:10.999196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.694 [2024-07-13 22:13:10.999217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.694 [2024-07-13 22:13:10.999243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.694 [2024-07-13 22:13:10.999265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:29:51.694 [2024-07-13 22:13:10.999274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:51.694 [2024-07-13 22:13:10.999289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:1[2024-07-13 22:13:10.999292] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.694 with the state(5) to be set 00:29:51.694 [2024-07-13 22:13:10.999311] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same [2024-07-13 22:13:10.999312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cwith the state(5) to be set 00:29:51.694 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.694 [2024-07-13 22:13:10.999331] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:51.694 [2024-07-13 22:13:10.999338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.694 [2024-07-13 22:13:10.999350] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:51.694 [2024-07-13 22:13:10.999368] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:51.694 [2024-07-13 22:13:10.999359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.694 [2024-07-13 22:13:10.999386] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:51.694 [2024-07-13 22:13:10.999400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.694 [2024-07-13 22:13:10.999405] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:51.694 [2024-07-13 22:13:10.999423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.694 [2024-07-13 22:13:10.999427] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:51.694 [2024-07-13 22:13:10.999446] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:51.694 [2024-07-13 22:13:10.999448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.694 [2024-07-13 22:13:10.999465] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:51.694 [2024-07-13 22:13:10.999471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.694 [2024-07-13 22:13:10.999483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:51.694 [2024-07-13 22:13:10.999495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.694 [2024-07-13 22:13:10.999501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:51.694 [2024-07-13 22:13:10.999518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-13 22:13:10.999519] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.694 with the state(5) to be set 00:29:51.694 [2024-07-13 22:13:10.999539] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:51.694 [2024-07-13 22:13:10.999545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.694 [2024-07-13 22:13:10.999557] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:51.694 [2024-07-13 22:13:10.999568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.694 [2024-07-13 22:13:10.999575] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same with the state(5) to be set 00:29:51.694 [2024-07-13 22:13:10.999592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:1[2024-07-13 22:13:10.999595] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000bc80 is same 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.694 with the state(5) to be set 00:29:51.694 [2024-07-13 22:13:10.999627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.694 [2024-07-13 22:13:10.999651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.694 [2024-07-13 22:13:10.999672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.694 [2024-07-13 22:13:10.999696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.694 [2024-07-13 22:13:10.999718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.694 [2024-07-13 22:13:10.999742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.694 [2024-07-13 22:13:10.999763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.694 [2024-07-13 22:13:10.999787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.694 [2024-07-13 22:13:10.999813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.694 [2024-07-13 22:13:10.999837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 
nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.694 [2024-07-13 22:13:10.999877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.694 [2024-07-13 22:13:10.999903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.694 [2024-07-13 22:13:10.999925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.694 [2024-07-13 22:13:10.999950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.694 [2024-07-13 22:13:10.999971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.694 [2024-07-13 22:13:10.999996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.694 [2024-07-13 22:13:11.000017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.694 [2024-07-13 22:13:11.000041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.694 [2024-07-13 22:13:11.000062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.694 [2024-07-13 22:13:11.000086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.694 [2024-07-13 22:13:11.000108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.694 [2024-07-13 22:13:11.000131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.694 [2024-07-13 22:13:11.000153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.694 [2024-07-13 22:13:11.000188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.695 [2024-07-13 22:13:11.000209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.695 [2024-07-13 22:13:11.000233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.695 [2024-07-13 22:13:11.000254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.695 [2024-07-13 22:13:11.000278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.695 [2024-07-13 22:13:11.000300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.695 [2024-07-13 22:13:11.000324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 
lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.695 [2024-07-13 22:13:11.000345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.695 [2024-07-13 22:13:11.000370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.695 [2024-07-13 22:13:11.000391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.695 [2024-07-13 22:13:11.000419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.695 [2024-07-13 22:13:11.000441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.695 [2024-07-13 22:13:11.000465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.695 [2024-07-13 22:13:11.000487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.695 [2024-07-13 22:13:11.000510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.695 [2024-07-13 22:13:11.000532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.695 [2024-07-13 22:13:11.000556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.695 [2024-07-13 22:13:11.000577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.695 [2024-07-13 22:13:11.000601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.695 [2024-07-13 22:13:11.000622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.695 [2024-07-13 22:13:11.000646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.695 [2024-07-13 22:13:11.000667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.695 [2024-07-13 22:13:11.000690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.695 [2024-07-13 22:13:11.000711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.695 [2024-07-13 22:13:11.000735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.695 [2024-07-13 22:13:11.000756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.695 [2024-07-13 22:13:11.001074] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f9580 was disconnected and freed. 
reset controller.
00:29:51.695 [2024-07-13 22:13:11.001069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001115] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001137] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001182] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001218] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001253] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001282] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.695 [2024-07-13 22:13:11.001301] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001337] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001357] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001379] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.695 [2024-07-13 22:13:11.001472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.695 [2024-07-13 22:13:11.001493] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.695 [2024-07-13 22:13:11.001533] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.695 [2024-07-13 22:13:11.001552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.695 [2024-07-13 22:13:11.001570] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.695 [2024-07-13 22:13:11.001593] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.695 [2024-07-13 22:13:11.001630] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.695 [2024-07-13 22:13:11.001648] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001666] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.695 [2024-07-13 22:13:11.001683] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.695 [2024-07-13 22:13:11.001701] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.695 [2024-07-13 22:13:11.001718] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001737] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.695 [2024-07-13 22:13:11.001757] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.695 [2024-07-13 22:13:11.001786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.695 [2024-07-13 22:13:11.001780] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.695 [2024-07-13 22:13:11.001818] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.695 [2024-07-13 22:13:11.001844] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.695 [2024-07-13 22:13:11.001877] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.695 [2024-07-13 22:13:11.001905] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.695 [2024-07-13 22:13:11.001947] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.695 [2024-07-13 22:13:11.001950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.695 [2024-07-13 22:13:11.001972] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.696 [2024-07-13 22:13:11.001981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.696 [2024-07-13 22:13:11.001990] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.696 [2024-07-13 22:13:11.002003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.696 [2024-07-13 22:13:11.002008] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.696 [2024-07-13 22:13:11.002025] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.696 [2024-07-13 22:13:11.002027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.696 [2024-07-13 22:13:11.002042] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.696 [2024-07-13 22:13:11.002049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.696 [2024-07-13 22:13:11.002060] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.696 [2024-07-13 22:13:11.002074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.696 [2024-07-13 22:13:11.002078] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.696 [2024-07-13 22:13:11.002096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.696 [2024-07-13 22:13:11.002097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.696 [2024-07-13 22:13:11.002122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.696 [2024-07-13 22:13:11.002123] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.696 [2024-07-13 22:13:11.002143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.696 [2024-07-13 22:13:11.002170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.696 [2024-07-13 22:13:11.002178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.696 [2024-07-13 22:13:11.002194] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.696 [2024-07-13 22:13:11.002200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.696 [2024-07-13 22:13:11.002212] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.696 [2024-07-13 22:13:11.002236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.696 [2024-07-13 22:13:11.002239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.696 [2024-07-13 22:13:11.002259] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.696 [2024-07-13 22:13:11.002260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.696 [2024-07-13 22:13:11.002281] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.696 [2024-07-13 22:13:11.002289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.696 [2024-07-13 22:13:11.002300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.696 [2024-07-13 22:13:11.002312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.696 [2024-07-13 22:13:11.002324] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.696 [2024-07-13 22:13:11.002336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.696 [2024-07-13 22:13:11.002342] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.696 [2024-07-13 22:13:11.002357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.696 [2024-07-13 22:13:11.002359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.696 [2024-07-13 22:13:11.002378] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.696 [2024-07-13 22:13:11.002385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.696 [2024-07-13 22:13:11.002395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.696 [2024-07-13 22:13:11.002407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.696 [2024-07-13 22:13:11.002412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.696 [2024-07-13 22:13:11.002429] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.696 [2024-07-13 22:13:11.002432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.696 [2024-07-13 22:13:11.002446] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61800000c080 is same with the state(5) to be set
00:29:51.696 [2024-07-13 22:13:11.002454] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.696 [2024-07-13 22:13:11.002482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.696 [2024-07-13 22:13:11.002505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.696 [2024-07-13 22:13:11.002529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.696 [2024-07-13 22:13:11.002551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.696 [2024-07-13 22:13:11.002575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.696 [2024-07-13 22:13:11.002596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.696 [2024-07-13 22:13:11.002620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.696 [2024-07-13 22:13:11.002645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.696 [2024-07-13 22:13:11.002670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.696 [2024-07-13 22:13:11.002692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.696 [2024-07-13 22:13:11.002720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.696 [2024-07-13 22:13:11.002742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.696 [2024-07-13 22:13:11.002767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.696 [2024-07-13 22:13:11.002788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.696 [2024-07-13 22:13:11.002812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.696 [2024-07-13 22:13:11.002833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.696 [2024-07-13 22:13:11.002873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.696 [2024-07-13 22:13:11.002897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.696 [2024-07-13 22:13:11.002921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.696 [2024-07-13 22:13:11.002943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.696 [2024-07-13 22:13:11.002966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.696 [2024-07-13 22:13:11.002988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.696 [2024-07-13 22:13:11.003012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.696 [2024-07-13 22:13:11.003033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.696 [2024-07-13 22:13:11.003057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.696 [2024-07-13 22:13:11.003078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.696 [2024-07-13 22:13:11.003102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.696 [2024-07-13 22:13:11.003124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.696 [2024-07-13 22:13:11.003163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.696 [2024-07-13 22:13:11.003190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.696 [2024-07-13 22:13:11.003214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.696 [2024-07-13 22:13:11.003236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.696 [2024-07-13 22:13:11.003264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.696 [2024-07-13 22:13:11.003293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.696 [2024-07-13 22:13:11.003319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.696 [2024-07-13 22:13:11.003341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.696 [2024-07-13 22:13:11.003366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.696 [2024-07-13 22:13:11.003387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.696 [2024-07-13 22:13:11.003411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.696 [2024-07-13 22:13:11.003432] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.696 [2024-07-13 22:13:11.003456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.696 [2024-07-13 22:13:11.003486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.696 [2024-07-13 22:13:11.003510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.697 [2024-07-13 22:13:11.003542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.697 [2024-07-13 22:13:11.003567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.697 [2024-07-13 22:13:11.003588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.697 [2024-07-13 22:13:11.003621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.697 [2024-07-13 22:13:11.003642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.697 [2024-07-13 22:13:11.003667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.697 [2024-07-13 22:13:11.003688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.697 [2024-07-13 22:13:11.003711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.697 [2024-07-13 22:13:11.003732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.697 [2024-07-13 22:13:11.003756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.697 [2024-07-13 22:13:11.003777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.697 [2024-07-13 22:13:11.003801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.697 [2024-07-13 22:13:11.003822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.697 [2024-07-13 22:13:11.003846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.697 [2024-07-13 22:13:11.003888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.697 [2024-07-13 22:13:11.003915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.697 [2024-07-13 22:13:11.003937] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.697 [2024-07-13 22:13:11.003961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.697 [2024-07-13 22:13:11.003983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.697 [2024-07-13 22:13:11.004006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.697 [2024-07-13 22:13:11.004028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.697 [2024-07-13 22:13:11.004052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.697 [2024-07-13 22:13:11.004074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.697 [2024-07-13 22:13:11.004098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.697 [2024-07-13 22:13:11.004118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.697 [2024-07-13 22:13:11.004142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.697 [2024-07-13 22:13:11.004163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.697 [2024-07-13 22:13:11.004195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.697 [2024-07-13 22:13:11.004217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.697 [2024-07-13 22:13:11.004240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.697 [2024-07-13 22:13:11.004262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.697 [2024-07-13 22:13:11.004286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.697 [2024-07-13 22:13:11.004307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.697 [2024-07-13 22:13:11.004330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.697 [2024-07-13 22:13:11.004351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.697 [2024-07-13 22:13:11.004375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.697 [2024-07-13 22:13:11.004396] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.697 [2024-07-13 22:13:11.004420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.697 [2024-07-13 22:13:11.004441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.697 [2024-07-13 22:13:11.004474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.697 [2024-07-13 22:13:11.004496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.697 [2024-07-13 22:13:11.004520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.697 [2024-07-13 22:13:11.004547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.697 [2024-07-13 22:13:11.004570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.697 [2024-07-13 22:13:11.004592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.697 [2024-07-13 22:13:11.004613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9300 is same with the state(5) to be set 00:29:51.697 [2024-07-13 22:13:11.004905] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f9300 was disconnected and freed. reset controller. 
00:29:51.697 [2024-07-13 22:13:11.006363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.697 [2024-07-13 22:13:11.006396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.697 [2024-07-13 22:13:11.006421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.697 [2024-07-13 22:13:11.006442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.697 [2024-07-13 22:13:11.006463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.697 [2024-07-13 22:13:11.006483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.697 [2024-07-13 22:13:11.006503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.697 [2024-07-13 22:13:11.006523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.697 [2024-07-13 22:13:11.006541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(5) to be set
00:29:51.697 [2024-07-13 22:13:11.006586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor
00:29:51.697 [2024-07-13 22:13:11.006632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4a80 (9): Bad file descriptor
00:29:51.697 [2024-07-13 22:13:11.006696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5200 (9): Bad file descriptor
00:29:51.697 [2024-07-13 22:13:11.006759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.697 [2024-07-13 22:13:11.006786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.697 [2024-07-13 22:13:11.006808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.697 [2024-07-13 22:13:11.006829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.697 [2024-07-13 22:13:11.006850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.697 [2024-07-13 22:13:11.006888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.697 [2024-07-13 22:13:11.006916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:51.697 [2024-07-13 22:13:11.006937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.697 [2024-07-13 22:13:11.006955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5980 is same with the state(5) to be set
00:29:51.697 [2024-07-13 22:13:11.008511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:29:51.697 [2024-07-13 22:13:11.008623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.697 [2024-07-13 22:13:11.008654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ / ABORTED - SQ DELETION pairs repeat for cid:1 through cid:62, lba 16512 through 24320 in steps of 128 ...]
00:29:51.699 [2024-07-13 22:13:11.011706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.699 [2024-07-13 22:13:11.011728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.699 [2024-07-13 22:13:11.011750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8680 is same with the state(5) to be set
00:29:51.699 [2024-07-13 22:13:11.013414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.699 [2024-07-13 22:13:11.013446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ / ABORTED - SQ DELETION pairs repeat for cid:1 through cid:62, lba 16512 through 24320 in steps of 128 ...]
00:29:51.700 [2024-07-13 22:13:11.016550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.700 [2024-07-13 22:13:11.016571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.700 [2024-07-13 22:13:11.016593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8900 is same with the state(5) to be set
00:29:51.700 [2024-07-13 22:13:11.018332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.700 [2024-07-13 22:13:11.018366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ / ABORTED - SQ DELETION pairs repeat for cid:1 through cid:50, lba 16512 through 22784 in steps of 128 ...]
00:29:51.702 [2024-07-13 22:13:11.020838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.702 [2024-07-13 22:13:11.020862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.020893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.020919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.020943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.020964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.020989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.021011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.021036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.021058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.021082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.021103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.021132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.021154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.021179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.021200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.021233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.021255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.021290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.021312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.021336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 
22:13:11.021358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.021381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.021403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.021427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.021449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.021471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8b80 is same with the state(5) to be set 00:29:51.702 [2024-07-13 22:13:11.023137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.023171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.023225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.023251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.023288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.023310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.023334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.023356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.023379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.023406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.023438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.023470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.023494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.023515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.023538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.023559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.023583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.023604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.023628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.023649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.023673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.023694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.023717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.023739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.023762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.023784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.023818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.023839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.023863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.023911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.023936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.023958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.702 [2024-07-13 22:13:11.023982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.702 [2024-07-13 22:13:11.024003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.024026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.024052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.024077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.024099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.024122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.024143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.024167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.024199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.024226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.024251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.024274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.024296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.024319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.024341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.024364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.024386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.024409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.024431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.024454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.024476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.024499] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.024521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.024556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.024578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.024602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.024623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.024651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.024673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.024697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.024719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.024742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.024764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.024787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.024809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.024833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.024861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.024895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.024918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.024942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.024964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.024987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.025009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.025033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.025055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.025078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.025099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.025123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.025144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.025177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.025198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.025232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.025258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.025282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.025313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.025337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.025358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.025382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.025403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.025427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.025459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.025483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.025504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.025528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.025549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.025573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.025594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.025618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.025639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.025662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.025684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.025708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.025729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.025752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.025774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.025797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.025819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.025842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.025885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.025911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.025933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.025957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.025979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.026003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.703 [2024-07-13 22:13:11.026024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.703 [2024-07-13 22:13:11.026048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.704 [2024-07-13 22:13:11.026069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.704 [2024-07-13 22:13:11.026093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.704 [2024-07-13 22:13:11.026115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.704 [2024-07-13 22:13:11.026138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.704 [2024-07-13 22:13:11.026171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.704 [2024-07-13 22:13:11.026195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.704 [2024-07-13 22:13:11.026228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.704 [2024-07-13 22:13:11.026251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.704 [2024-07-13 22:13:11.026273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.704 [2024-07-13 22:13:11.026301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f8e00 is same with the state(5) to be set 00:29:51.704 [2024-07-13 22:13:11.028664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:29:51.704 [2024-07-13 22:13:11.028714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:29:51.704 [2024-07-13 22:13:11.028742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.704 [2024-07-13 22:13:11.028767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:29:51.704 [2024-07-13 22:13:11.029078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.704 [2024-07-13 22:13:11.029118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4a80 with addr=10.0.0.2, port=4420 00:29:51.704 [2024-07-13 22:13:11.029143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
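[Note] Every completion in the runs above carries the status pair "(00/08)". In NVMe terms that is status code type 0x0 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion": the queued READs are not failing on the media, they are being drained because the reset path deletes the submission queue they sit in. A minimal standalone decoder for that pair -- an illustration, not SPDK's own print code -- could look like this:

    #include <stdio.h>

    /* Map a generic-status (SCT 0x0) code to the label used in the log;
     * only the codes relevant here are covered. */
    static const char *generic_sc_str(unsigned sc)
    {
        switch (sc) {
        case 0x00: return "SUCCESS";
        case 0x08: return "ABORTED - SQ DELETION";
        default:   return "(other generic status)";
        }
    }

    int main(void)
    {
        unsigned sct = 0x00, sc = 0x08;   /* the "(00/08)" printed above */

        printf("(%02x/%02x) => %s\n", sct, sc,
               sct == 0x00 ? generic_sc_str(sc) : "(non-generic status type)");
        return 0;
    }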
00:29:51.704 [2024-07-13 22:13:11.029232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor
00:29:51.704 [2024-07-13 22:13:11.029322] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:51.704 [2024-07-13 22:13:11.029359] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:51.704 [2024-07-13 22:13:11.029395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5980 (9): Bad file descriptor
00:29:51.704 [2024-07-13 22:13:11.029441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4a80 (9): Bad file descriptor
00:29:51.704 [2024-07-13 22:13:11.030207] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:51.704 [2024-07-13 22:13:11.030339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:29:51.704 [2024-07-13 22:13:11.030376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:29:51.704 [2024-07-13 22:13:11.030654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.704 [2024-07-13 22:13:11.030690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5200 with addr=10.0.0.2, port=4420
00:29:51.704 [2024-07-13 22:13:11.030714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5200 is same with the state(5) to be set
00:29:51.704 [2024-07-13 22:13:11.030900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.704 [2024-07-13 22:13:11.030935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6880 with addr=10.0.0.2, port=4420
00:29:51.704 [2024-07-13 22:13:11.030957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6880 is same with the state(5) to be set
00:29:51.704 [2024-07-13 22:13:11.031104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.704 [2024-07-13 22:13:11.031136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:29:51.704 [2024-07-13 22:13:11.031158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:29:51.704 [2024-07-13 22:13:11.031343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.704 [2024-07-13 22:13:11.031377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2c80 with addr=10.0.0.2, port=4420
00:29:51.704 [2024-07-13 22:13:11.031399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set
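[Note] The recurring "connect() failed, errno = 111" lines are the driver's reconnect attempts being refused: on Linux errno 111 is ECONNREFUSED, meaning nothing was accepting on 10.0.0.2:4420 at that moment, which is expected while the target side of the test is being torn down. A self-contained reproduction (illustrative only, not SPDK's posix_sock_create) that prints errno 111 when the peer actively refuses:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* address and port copied from the log lines above */
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }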
00:29:51.704 [2024-07-13 22:13:11.033421 .. 22:13:11.036463] nvme_qpair.c: *NOTICE*: 64 READ commands (sqid:1 cid:0..63 nsid:1 lba:16384..24448 step 128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.705 [2024-07-13 22:13:11.036484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9080 is same with the state(5) to be set
00:29:51.705 [2024-07-13 22:13:11.038173] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:51.705 [2024-07-13 22:13:11.038220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:29:51.705 [2024-07-13 22:13:11.038416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.705 [2024-07-13 22:13:11.038453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3400 with addr=10.0.0.2, port=4420
00:29:51.705 [2024-07-13 22:13:11.038476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3400 is same with the state(5) to be set
00:29:51.705 [2024-07-13 22:13:11.038662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.705 [2024-07-13 22:13:11.038696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3b80 with addr=10.0.0.2, port=4420
00:29:51.705 [2024-07-13 22:13:11.038724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3b80 is same with the state(5) to be set
00:29:51.705 [2024-07-13 22:13:11.038751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5200 (9): Bad file descriptor
00:29:51.705 [2024-07-13 22:13:11.038781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6880 (9): Bad file descriptor
00:29:51.706 [2024-07-13 22:13:11.038808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:29:51.706 [2024-07-13 22:13:11.038835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2c80 (9): Bad file descriptor
00:29:51.706 [2024-07-13 22:13:11.038891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:29:51.706 [2024-07-13 22:13:11.038913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:29:51.706 [2024-07-13 22:13:11.038936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:29:51.706 [2024-07-13 22:13:11.039150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
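[Note] The "(9): Bad file descriptor" flush failures are errno 9, EBADF: by the time nvme_tcp_qpair_process_completions tries to flush these qpairs, the disconnect path has already closed the underlying socket, so any further operation on the saved fd fails. A one-line check of the mapping (illustrative):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* errno 9 is EBADF; strerror() yields the exact text in the log */
        printf("errno %d: %s (EBADF == %d)\n", 9, strerror(9), EBADF);
        return 0;
    }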
00:29:51.706 [2024-07-13 22:13:11.039336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.706 [2024-07-13 22:13:11.039370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420 00:29:51.706 [2024-07-13 22:13:11.039392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(5) to be set 00:29:51.706 [2024-07-13 22:13:11.039418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3400 (9): Bad file descriptor 00:29:51.706 [2024-07-13 22:13:11.039446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3b80 (9): Bad file descriptor 00:29:51.706 [2024-07-13 22:13:11.039470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:29:51.706 [2024-07-13 22:13:11.039489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:29:51.706 [2024-07-13 22:13:11.039507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:29:51.706 [2024-07-13 22:13:11.039536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:29:51.706 [2024-07-13 22:13:11.039556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:29:51.706 [2024-07-13 22:13:11.039574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:29:51.706 [2024-07-13 22:13:11.039600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.706 [2024-07-13 22:13:11.039620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.706 [2024-07-13 22:13:11.039638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.706 [2024-07-13 22:13:11.039663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:29:51.706 [2024-07-13 22:13:11.039682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:29:51.706 [2024-07-13 22:13:11.039700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:29:51.706 [2024-07-13 22:13:11.039757] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:51.706 [2024-07-13 22:13:11.039786] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:51.706 [2024-07-13 22:13:11.039811] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:51.706 [2024-07-13 22:13:11.039852] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:29:51.706 [2024-07-13 22:13:11.040400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.706 [2024-07-13 22:13:11.040437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.706 [2024-07-13 22:13:11.040455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.706 [2024-07-13 22:13:11.040472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.706 [2024-07-13 22:13:11.040517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor 00:29:51.706 [2024-07-13 22:13:11.040544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:29:51.706 [2024-07-13 22:13:11.040563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:29:51.706 [2024-07-13 22:13:11.040590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:29:51.706 [2024-07-13 22:13:11.040615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:29:51.706 [2024-07-13 22:13:11.040644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:29:51.706 [2024-07-13 22:13:11.040661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:29:51.706 [2024-07-13 22:13:11.040786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.706 [2024-07-13 22:13:11.040816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.706 [2024-07-13 22:13:11.040848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.706 [2024-07-13 22:13:11.040889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.706 [2024-07-13 22:13:11.040916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.706 [2024-07-13 22:13:11.040938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.706 [2024-07-13 22:13:11.040961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.706 [2024-07-13 22:13:11.040983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.706 [2024-07-13 22:13:11.041007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.706 [2024-07-13 22:13:11.041028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.706 [2024-07-13 22:13:11.041051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.706 [2024-07-13 22:13:11.041073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.706 [2024-07-13 22:13:11.041097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 
nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.706 [2024-07-13 22:13:11.041118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.706 [2024-07-13 22:13:11.041142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.706 [2024-07-13 22:13:11.041176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.706 [2024-07-13 22:13:11.041201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.706 [2024-07-13 22:13:11.041223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.706 [2024-07-13 22:13:11.041246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.706 [2024-07-13 22:13:11.041267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.706 [2024-07-13 22:13:11.041290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.706 [2024-07-13 22:13:11.041312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.706 [2024-07-13 22:13:11.041335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.706 [2024-07-13 22:13:11.041366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.706 [2024-07-13 22:13:11.041390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.706 [2024-07-13 22:13:11.041411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.706 [2024-07-13 22:13:11.041441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.706 [2024-07-13 22:13:11.041462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.706 [2024-07-13 22:13:11.041485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.706 [2024-07-13 22:13:11.041506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.706 [2024-07-13 22:13:11.041529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.706 [2024-07-13 22:13:11.041558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.706 [2024-07-13 22:13:11.041581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.706 [2024-07-13 22:13:11.041602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.706 [2024-07-13 22:13:11.041637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.706 [2024-07-13 22:13:11.041658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.706 [2024-07-13 22:13:11.041681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.706 [2024-07-13 22:13:11.041703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.706 [2024-07-13 22:13:11.041726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.706 [2024-07-13 22:13:11.041747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.041775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.041797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.041821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.041842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.041881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.041905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.041929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.041950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.041974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.041995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.042018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.042040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.042063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.042084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.042107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.042130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.042161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.042182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.042204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.042232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.042256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.042278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.042301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.042322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.042346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.042372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.042395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.042418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.042443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.042465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.042489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.042510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.042534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:51.707 [2024-07-13 22:13:11.042555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.042578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.042600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.042623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.042645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.042669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.042691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.042715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.042737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.042762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.042784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.042808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.042831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.042873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.042898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.042923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.042945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.042974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.042997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.043036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 
22:13:11.043058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.043082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.043104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.043128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.043149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.043179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.043200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.043223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.043249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.043272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.043294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.043317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.043348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.043371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.043393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.043416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.043442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.043466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.043487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.043511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.043532] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.043556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.043581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.043605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.043631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.043655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.043676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.043700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.043721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.043745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.043766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.707 [2024-07-13 22:13:11.043790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.707 [2024-07-13 22:13:11.043811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.043840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.043862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.043907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9800 is same with the state(5) to be set 00:29:51.708 [2024-07-13 22:13:11.045515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.045548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.045590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.045615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.045641] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.045663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.045686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.045707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.045730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.045752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.045775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.045801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.045825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.045847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.045878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.045901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.045925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.045946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.045970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.045991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.046015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.046036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.046059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.046080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.046103] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.046124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.046146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.046168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.046191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.046212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.046235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.046256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.046279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.046300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.046330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.046351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.046374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.046400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.046435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.046456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.046479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.046507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.046531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.046552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.046575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.046604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.046627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.046648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.046672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.046693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.046716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.046737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.046760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.046781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.046804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.046825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.046848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.046876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.046902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.046923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.046946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.046968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.046995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.047017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.047041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.047062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.047085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.047106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.047129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.047150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.047173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.047194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.047217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.047244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.047266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.047288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.047315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.047336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.047359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.047380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.708 [2024-07-13 22:13:11.047403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.708 [2024-07-13 22:13:11.047425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.709 [2024-07-13 22:13:11.047448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.709 [2024-07-13 22:13:11.047469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.709 [2024-07-13 22:13:11.047494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:51.709 [2024-07-13 22:13:11.047515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.709 [2024-07-13 22:13:11.047538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.709 [2024-07-13 22:13:11.047575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.709 [2024-07-13 22:13:11.047600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.709 [2024-07-13 22:13:11.047621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.709 [2024-07-13 22:13:11.047644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.709 [2024-07-13 22:13:11.047679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.709 [2024-07-13 22:13:11.047705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.709 [2024-07-13 22:13:11.047727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.709 [2024-07-13 22:13:11.047750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.709 [2024-07-13 22:13:11.047771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.709 [2024-07-13 22:13:11.047795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.709 [2024-07-13 22:13:11.047828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.709 [2024-07-13 22:13:11.047851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.709 [2024-07-13 22:13:11.047881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.709 [2024-07-13 22:13:11.047907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.709 [2024-07-13 22:13:11.047928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.709 [2024-07-13 22:13:11.047951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.709 [2024-07-13 22:13:11.047973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.709 [2024-07-13 22:13:11.047997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:51.709 [2024-07-13 22:13:11.048019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.709 [2024-07-13 22:13:11.048042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.709 [2024-07-13 22:13:11.048064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.709 [2024-07-13 22:13:11.048087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.709 [2024-07-13 22:13:11.048109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.709 [2024-07-13 22:13:11.048132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.709 [2024-07-13 22:13:11.048154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.709 [2024-07-13 22:13:11.048181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.709 [2024-07-13 22:13:11.048203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.709 [2024-07-13 22:13:11.048226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.709 [2024-07-13 22:13:11.048248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.709 [2024-07-13 22:13:11.048271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.709 [2024-07-13 22:13:11.048292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.709 [2024-07-13 22:13:11.048316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.709 [2024-07-13 22:13:11.048337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.709 [2024-07-13 22:13:11.048360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.709 [2024-07-13 22:13:11.048381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.709 [2024-07-13 22:13:11.048405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.709 [2024-07-13 22:13:11.048426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.709 [2024-07-13 22:13:11.048449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.709 [2024-07-13 
22:13:11.048471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.709 [2024-07-13 22:13:11.048494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:51.709 [2024-07-13 22:13:11.048515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:51.709 [2024-07-13 22:13:11.048536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f9a80 is same with the state(5) to be set
00:29:51.709 [2024-07-13 22:13:11.053180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:29:51.709 [2024-07-13 22:13:11.053227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.709 [2024-07-13 22:13:11.053250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.709 [2024-07-13 22:13:11.053269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:29:51.967 task offset: 19200 on job bdev=Nvme10n1 fails
00:29:51.967
00:29:51.967 Latency(us)
00:29:51.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:51.967 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:51.967 Job: Nvme1n1 ended in about 0.96 seconds with error
00:29:51.967 Verification LBA range: start 0x0 length 0x400
00:29:51.967 Nvme1n1 : 0.96 132.99 8.31 66.50 0.00 316537.11 21068.61 296708.17
00:29:51.967 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:51.967 Job: Nvme2n1 ended in about 0.97 seconds with error
00:29:51.967 Verification LBA range: start 0x0 length 0x400
00:29:51.967 Nvme2n1 : 0.97 132.33 8.27 66.16 0.00 311362.24 26408.58 312242.63
00:29:51.967 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:51.967 Job: Nvme3n1 ended in about 0.97 seconds with error
00:29:51.967 Verification LBA range: start 0x0 length 0x400
00:29:51.967 Nvme3n1 : 0.97 131.66 8.23 65.83 0.00 306401.03 50098.63 304475.40
00:29:51.967 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:51.967 Job: Nvme4n1 ended in about 0.98 seconds with error
00:29:51.967 Verification LBA range: start 0x0 length 0x400
00:29:51.967 Nvme4n1 : 0.98 131.02 8.19 65.51 0.00 301369.27 23107.51 324670.20
00:29:51.967 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:51.967 Job: Nvme5n1 ended in about 0.99 seconds with error
00:29:51.967 Verification LBA range: start 0x0 length 0x400
00:29:51.967 Nvme5n1 : 0.99 129.67 8.10 64.84 0.00 298078.81 22913.33 292047.83
00:29:51.967 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:51.967 Job: Nvme6n1 ended in about 0.96 seconds with error
00:29:51.967 Verification LBA range: start 0x0 length 0x400
00:29:51.967 Nvme6n1 : 0.96 133.65 8.35 66.83 0.00 281708.97 19709.35 302921.96
00:29:51.967 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:51.967 Job: Nvme7n1 ended in about 0.96 seconds with error
00:29:51.967 Verification LBA range: start 0x0 length 0x400
00:29:51.967 Nvme7n1 : 0.96 133.96 8.37 66.98 0.00 274415.44 9272.13 355739.12
00:29:51.967 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:51.967 Job: Nvme8n1 ended in about 0.99 seconds with error
00:29:51.967 Verification LBA range: start 0x0 length 0x400
00:29:51.967 Nvme8n1 : 0.99 128.71 8.04 64.35 0.00 280856.65 26796.94 329330.54
00:29:51.967 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:51.967 Job: Nvme9n1 ended in about 1.00 seconds with error
00:29:51.967 Verification LBA range: start 0x0 length 0x400
00:29:51.967 Nvme9n1 : 1.00 128.11 8.01 64.06 0.00 275862.57 35535.08 250104.79
00:29:51.967 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:51.967 Job: Nvme10n1 ended in about 0.94 seconds with error
00:29:51.967 Verification LBA range: start 0x0 length 0x400
00:29:51.967 Nvme10n1 : 0.94 136.16 8.51 68.08 0.00 249595.45 11019.76 315349.52
00:29:51.967 ===================================================================================================================
00:29:51.967 Total : 1318.27 82.39 659.14 0.00 289618.75 9272.13 355739.12
00:29:51.967 [2024-07-13 22:13:11.136484] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:51.967 [2024-07-13 22:13:11.136594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:29:51.967 [2024-07-13 22:13:11.136708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:29:51.967 [2024-07-13 22:13:11.136736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:29:51.967 [2024-07-13 22:13:11.136759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:29:51.967 [2024-07-13 22:13:11.136861] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:51.967 [2024-07-13 22:13:11.137093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.968 [2024-07-13 22:13:11.137491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.968 [2024-07-13 22:13:11.137533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4a80 with addr=10.0.0.2, port=4420
00:29:51.968 [2024-07-13 22:13:11.137560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4a80 is same with the state(5) to be set
00:29:51.968 [2024-07-13 22:13:11.137760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.968 [2024-07-13 22:13:11.137802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5980 with addr=10.0.0.2, port=4420
00:29:51.968 [2024-07-13 22:13:11.137826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5980 is same with the state(5) to be set
00:29:51.968 [2024-07-13 22:13:11.138052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.968 [2024-07-13 22:13:11.138087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6100 with addr=10.0.0.2, port=4420
00:29:51.968 [2024-07-13 22:13:11.138109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6100 is same with the state(5) to be set
00:29:51.968 [2024-07-13 22:13:11.138151] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:51.968 [2024-07-13 22:13:11.138181] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:51.968 [2024-07-13 22:13:11.138215] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:51.968 [2024-07-13 22:13:11.138241] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:51.968 [2024-07-13 22:13:11.138265] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:51.968 [2024-07-13 22:13:11.138290] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:29:51.968 [2024-07-13 22:13:11.139358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:29:51.968 [2024-07-13 22:13:11.139410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:29:51.968 [2024-07-13 22:13:11.139435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:29:51.968 [2024-07-13 22:13:11.139458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.968 [2024-07-13 22:13:11.139481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:29:51.968 [2024-07-13 22:13:11.139503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:29:51.968 [2024-07-13 22:13:11.139648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4a80 (9): Bad file descriptor
00:29:51.968 [2024-07-13 22:13:11.139688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5980 (9): Bad file descriptor
00:29:51.968 [2024-07-13 22:13:11.139716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6100 (9): Bad file descriptor
00:29:51.968 [2024-07-13 22:13:11.139845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:29:51.968 [2024-07-13 22:13:11.140085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.968 [2024-07-13 22:13:11.140122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3b80 with addr=10.0.0.2, port=4420
00:29:51.968 [2024-07-13 22:13:11.140146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3b80 is same with the state(5) to be set
00:29:51.968 [2024-07-13 22:13:11.140335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.968 [2024-07-13 22:13:11.140369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3400 with addr=10.0.0.2, port=4420
00:29:51.968 [2024-07-13 22:13:11.140391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f3400 is same with the state(5) to be set
00:29:51.968 [2024-07-13 22:13:11.140567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.968 [2024-07-13 22:13:11.140600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2c80 with addr=10.0.0.2, port=4420
00:29:51.968 [2024-07-13 22:13:11.140622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set
00:29:51.968 [2024-07-13 22:13:11.140782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.968 [2024-07-13 22:13:11.140815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:29:51.968 [2024-07-13 22:13:11.140837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:29:51.968 [2024-07-13 22:13:11.141002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.968 [2024-07-13 22:13:11.141035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f6880 with addr=10.0.0.2, port=4420
00:29:51.968 [2024-07-13 22:13:11.141057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f6880 is same with the state(5) to be set
00:29:51.968 [2024-07-13 22:13:11.141241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.968 [2024-07-13 22:13:11.141274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f5200 with addr=10.0.0.2, port=4420
00:29:51.968 [2024-07-13 22:13:11.141296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f5200 is same with the state(5) to be set
00:29:51.968 [2024-07-13 22:13:11.141317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:29:51.968 [2024-07-13 22:13:11.141335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:29:51.968 [2024-07-13 22:13:11.141353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:29:51.968 [2024-07-13 22:13:11.141380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:29:51.968 [2024-07-13 22:13:11.141400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:29:51.968 [2024-07-13 22:13:11.141418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:29:51.968 [2024-07-13 22:13:11.141443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:29:51.968 [2024-07-13 22:13:11.141462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:29:51.968 [2024-07-13 22:13:11.141480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:29:51.968 [2024-07-13 22:13:11.141564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.968 [2024-07-13 22:13:11.141591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.968 [2024-07-13 22:13:11.141609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.968 [2024-07-13 22:13:11.141772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.968 [2024-07-13 22:13:11.141806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f4300 with addr=10.0.0.2, port=4420
00:29:51.968 [2024-07-13 22:13:11.141828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f4300 is same with the state(5) to be set
00:29:51.968 [2024-07-13 22:13:11.141855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3b80 (9): Bad file descriptor
00:29:51.968 [2024-07-13 22:13:11.141893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f3400 (9): Bad file descriptor
00:29:51.968 [2024-07-13 22:13:11.141921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2c80 (9): Bad file descriptor
00:29:51.968 [2024-07-13 22:13:11.141948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:29:51.968 [2024-07-13 22:13:11.141976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f6880 (9): Bad file descriptor
00:29:51.968 [2024-07-13 22:13:11.142008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f5200 (9): Bad file descriptor
00:29:51.968 [2024-07-13 22:13:11.142095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f4300 (9): Bad file descriptor
00:29:51.968 [2024-07-13 22:13:11.142128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:29:51.968 [2024-07-13 22:13:11.142147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:29:51.968 [2024-07-13 22:13:11.142165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:29:51.968 [2024-07-13 22:13:11.142191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:29:51.968 [2024-07-13 22:13:11.142210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:29:51.968 [2024-07-13 22:13:11.142228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:29:51.968 [2024-07-13 22:13:11.142253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:29:51.968 [2024-07-13 22:13:11.142272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:29:51.968 [2024-07-13 22:13:11.142289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:29:51.968 [2024-07-13 22:13:11.142313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.968 [2024-07-13 22:13:11.142332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.968 [2024-07-13 22:13:11.142349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.968 [2024-07-13 22:13:11.142373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:29:51.968 [2024-07-13 22:13:11.142391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:29:51.968 [2024-07-13 22:13:11.142424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:29:51.968 [2024-07-13 22:13:11.142451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:29:51.968 [2024-07-13 22:13:11.142471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:29:51.968 [2024-07-13 22:13:11.142488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:29:51.968 [2024-07-13 22:13:11.142546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.968 [2024-07-13 22:13:11.142572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.968 [2024-07-13 22:13:11.142590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.968 [2024-07-13 22:13:11.142606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.968 [2024-07-13 22:13:11.142623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.968 [2024-07-13 22:13:11.142639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.968 [2024-07-13 22:13:11.142656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:29:51.968 [2024-07-13 22:13:11.142673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:29:51.968 [2024-07-13 22:13:11.142691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:29:51.968 [2024-07-13 22:13:11.142753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
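The repeated "connect() failed, errno = 111" entries above are ECONNREFUSED: the tc3 case tears the target down underneath active I/O, so every reconnect attempt to 10.0.0.2 port 4420 is refused and each controller (cnode1 through cnode10) ends in a failed state, which is why the test still passes below. A minimal probe reproducing the same errno, using only bash's /dev/tcp (address and port taken from this run; the probe is an illustration, not part of the test script):

    # errno 111 on Linux is ECONNREFUSED: nothing is listening on the target port.
    # Run the redirection in a subshell so its failure cannot affect the caller.
    if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "10.0.0.2:4420 refused the connection (errno 111 / ECONNREFUSED)"
    fi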
00:29:55.290 22:13:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:29:55.290 22:13:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:29:55.858 22:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 4168721 00:29:55.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (4168721) - No such process 00:29:55.858 22:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:29:55.858 22:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:29:55.858 22:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:29:55.858 22:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:55.858 22:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:55.858 22:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:29:55.858 22:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:55.858 22:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:29:55.858 22:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:55.858 22:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:29:55.858 22:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:55.858 22:13:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:55.858 rmmod nvme_tcp 00:29:55.858 rmmod nvme_fabrics 00:29:55.858 rmmod nvme_keyring 00:29:55.858 22:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:55.858 22:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:29:55.858 22:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:29:55.858 22:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:29:55.858 22:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:55.858 22:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:55.858 22:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:55.858 22:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:55.858 22:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:55.858 22:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.858 22:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:55.858 22:13:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.759 22:13:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:57.759 00:29:57.759 real 0m11.906s 00:29:57.759 user 0m34.906s 00:29:57.759 sys 0m2.038s 00:29:57.759 
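Condensed, the stoptarget/nvmftestfini teardown that just ran amounts to the following steps (paths and interface names are the ones from this run; the `ip netns del` line is an assumption about what `_remove_spdk_ns` ultimately does, everything else mirrors the commands logged above):

    rm -f ./local-job0-0-verify.state                # bdevperf job state file
    rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
    rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
    modprobe -v -r nvme-tcp                          # pulls out nvme_tcp/nvme_fabrics/nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    ip netns del cvl_0_0_ns_spdk 2>/dev/null || true # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                         # drop the initiator-side test address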
22:13:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:57.759 22:13:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:57.759 ************************************ 00:29:57.759 END TEST nvmf_shutdown_tc3 00:29:57.759 ************************************ 00:29:57.759 22:13:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:29:57.759 22:13:17 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:29:57.759 00:29:57.759 real 0m42.451s 00:29:57.759 user 2m13.617s 00:29:57.759 sys 0m8.143s 00:29:57.759 22:13:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:57.759 22:13:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:57.759 ************************************ 00:29:57.759 END TEST nvmf_shutdown 00:29:57.759 ************************************ 00:29:57.759 22:13:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:57.759 22:13:17 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:29:57.759 22:13:17 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:57.759 22:13:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:57.759 22:13:17 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:29:57.759 22:13:17 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:57.759 22:13:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:57.759 22:13:17 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:29:57.759 22:13:17 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:57.759 22:13:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:57.759 22:13:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:57.759 22:13:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:58.018 ************************************ 00:29:58.018 START TEST nvmf_multicontroller 00:29:58.018 ************************************ 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:58.018 * Looking for test storage... 
00:29:58.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:58.018 22:13:17 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:58.018 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:58.019 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:58.019 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:58.019 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:58.019 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.019 22:13:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:58.019 22:13:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.019 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:58.019 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:58.019 22:13:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:29:58.019 22:13:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:59.918 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:59.918 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:29:59.918 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:59.918 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:59.918 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:59.918 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:59.918 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:59.918 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:29:59.918 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:59.918 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:29:59.918 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:29:59.918 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:29:59.918 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:29:59.918 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:29:59.918 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:29:59.918 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.918 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.918 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.918 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.918 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.918 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.918 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.918 22:13:19 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.918 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.918 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.918 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:59.919 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:59.919 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:59.919 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:59.919 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.919 22:13:19 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:59.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:59.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:29:59.919 00:29:59.919 --- 10.0.0.2 ping statistics --- 00:29:59.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.919 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:59.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:29:59.919 00:29:59.919 --- 10.0.0.1 ping statistics --- 00:29:59.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.919 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:59.919 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:00.177 22:13:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:00.177 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:00.177 22:13:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:00.177 22:13:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:00.177 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=4171629 00:30:00.177 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:00.177 22:13:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 4171629 00:30:00.177 22:13:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 4171629 ']' 00:30:00.177 22:13:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.177 22:13:19 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:30:00.177 22:13:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:00.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:00.177 22:13:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:00.177 22:13:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:00.177 [2024-07-13 22:13:19.406618] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:00.177 [2024-07-13 22:13:19.406749] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:00.177 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.177 [2024-07-13 22:13:19.544668] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:00.435 [2024-07-13 22:13:19.800044] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:00.435 [2024-07-13 22:13:19.800121] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:00.435 [2024-07-13 22:13:19.800153] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:00.435 [2024-07-13 22:13:19.800174] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:00.435 [2024-07-13 22:13:19.800195] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
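nvmf_tgt was launched above with `-m 0xE`; 0xE is binary 1110, so the three reactor lines that follow (cores 1, 2 and 3, leaving core 0 free) are exactly what that core mask requests. A quick way to decode any such mask (illustrative only, not part of the harness):

    mask=$((0xE))                   # core mask handed to nvmf_tgt -m
    for c in {0..31}; do
        (( (mask >> c) & 1 )) && echo "reactor expected on core $c"
    done                            # prints cores 1, 2 and 3 for 0xE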
00:30:00.435 [2024-07-13 22:13:19.800330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:00.435 [2024-07-13 22:13:19.800432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.435 [2024-07-13 22:13:19.800441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:01.000 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:01.000 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:30:01.000 22:13:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:01.000 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:01.000 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.000 22:13:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:01.000 22:13:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:01.000 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.000 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.000 [2024-07-13 22:13:20.346478] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.000 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.000 22:13:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:01.000 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.000 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.258 Malloc0 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.258 [2024-07-13 22:13:20.462692] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.258 
22:13:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.258 [2024-07-13 22:13:20.470558] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.258 Malloc1 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.258 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.259 22:13:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:01.259 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.259 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.259 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.259 22:13:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=4171782 00:30:01.259 22:13:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:01.259 22:13:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 4171782 /var/tmp/bdevperf.sock 00:30:01.259 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 4171782 ']' 00:30:01.259 22:13:20 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:01.259 22:13:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:01.259 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:01.259 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:01.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:01.259 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:01.259 22:13:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.226 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:02.226 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:30:02.226 22:13:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:30:02.226 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.226 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.484 NVMe0n1 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.484 1 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 
-t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.484 request: 00:30:02.484 { 00:30:02.484 "name": "NVMe0", 00:30:02.484 "trtype": "tcp", 00:30:02.484 "traddr": "10.0.0.2", 00:30:02.484 "adrfam": "ipv4", 00:30:02.484 "trsvcid": "4420", 00:30:02.484 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:02.484 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:02.484 "hostaddr": "10.0.0.2", 00:30:02.484 "hostsvcid": "60000", 00:30:02.484 "prchk_reftag": false, 00:30:02.484 "prchk_guard": false, 00:30:02.484 "hdgst": false, 00:30:02.484 "ddgst": false, 00:30:02.484 "method": "bdev_nvme_attach_controller", 00:30:02.484 "req_id": 1 00:30:02.484 } 00:30:02.484 Got JSON-RPC error response 00:30:02.484 response: 00:30:02.484 { 00:30:02.484 "code": -114, 00:30:02.484 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:02.484 } 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.484 request: 00:30:02.484 { 00:30:02.484 "name": "NVMe0", 00:30:02.484 "trtype": "tcp", 00:30:02.484 "traddr": "10.0.0.2", 00:30:02.484 "adrfam": "ipv4", 00:30:02.484 "trsvcid": "4420", 00:30:02.484 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:02.484 "hostaddr": "10.0.0.2", 00:30:02.484 "hostsvcid": "60000", 00:30:02.484 "prchk_reftag": false, 00:30:02.484 "prchk_guard": false, 
00:30:02.484 "hdgst": false, 00:30:02.484 "ddgst": false, 00:30:02.484 "method": "bdev_nvme_attach_controller", 00:30:02.484 "req_id": 1 00:30:02.484 } 00:30:02.484 Got JSON-RPC error response 00:30:02.484 response: 00:30:02.484 { 00:30:02.484 "code": -114, 00:30:02.484 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:02.484 } 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.484 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.484 request: 00:30:02.484 { 00:30:02.484 "name": "NVMe0", 00:30:02.484 "trtype": "tcp", 00:30:02.484 "traddr": "10.0.0.2", 00:30:02.484 "adrfam": "ipv4", 00:30:02.484 "trsvcid": "4420", 00:30:02.484 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:02.484 "hostaddr": "10.0.0.2", 00:30:02.484 "hostsvcid": "60000", 00:30:02.484 "prchk_reftag": false, 00:30:02.484 "prchk_guard": false, 00:30:02.484 "hdgst": false, 00:30:02.484 "ddgst": false, 00:30:02.484 "multipath": "disable", 00:30:02.484 "method": "bdev_nvme_attach_controller", 00:30:02.485 "req_id": 1 00:30:02.485 } 00:30:02.485 Got JSON-RPC error response 00:30:02.485 response: 00:30:02.485 { 00:30:02.485 "code": -114, 00:30:02.485 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:30:02.485 } 00:30:02.485 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:02.485 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:02.485 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:02.485 22:13:21 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:02.485 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:02.485 22:13:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:02.485 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:30:02.485 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:02.485 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:02.485 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:02.485 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:02.485 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:02.485 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:30:02.485 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.485 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.485 request: 00:30:02.485 { 00:30:02.485 "name": "NVMe0", 00:30:02.485 "trtype": "tcp", 00:30:02.485 "traddr": "10.0.0.2", 00:30:02.485 "adrfam": "ipv4", 00:30:02.485 "trsvcid": "4420", 00:30:02.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:02.485 "hostaddr": "10.0.0.2", 00:30:02.485 "hostsvcid": "60000", 00:30:02.485 "prchk_reftag": false, 00:30:02.485 "prchk_guard": false, 00:30:02.485 "hdgst": false, 00:30:02.485 "ddgst": false, 00:30:02.485 "multipath": "failover", 00:30:02.485 "method": "bdev_nvme_attach_controller", 00:30:02.485 "req_id": 1 00:30:02.485 } 00:30:02.485 Got JSON-RPC error response 00:30:02.485 response: 00:30:02.485 { 00:30:02.485 "code": -114, 00:30:02.485 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:30:02.485 } 00:30:02.485 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:02.485 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:30:02.485 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:02.485 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:02.485 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:02.485 22:13:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:02.485 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.485 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.743 00:30:02.743 22:13:21 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.743 22:13:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:02.743 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.743 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.743 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.743 22:13:21 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:30:02.743 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.743 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.743 00:30:02.743 22:13:21 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.743 22:13:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:02.743 22:13:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:02.743 22:13:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:02.743 22:13:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.743 22:13:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:02.743 22:13:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:02.743 22:13:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:04.116 0 00:30:04.116 22:13:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:04.116 22:13:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:04.116 22:13:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:04.116 22:13:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:04.117 22:13:23 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 4171782 00:30:04.117 22:13:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 4171782 ']' 00:30:04.117 22:13:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 4171782 00:30:04.117 22:13:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:30:04.117 22:13:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:04.117 22:13:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4171782 00:30:04.117 22:13:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:04.117 22:13:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:04.117 22:13:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4171782' 00:30:04.117 killing process with pid 4171782 00:30:04.117 22:13:23 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 4171782 00:30:04.117 22:13:23 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 4171782 00:30:05.050 22:13:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:05.050 22:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.050 22:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:05.050 22:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.050 22:13:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:05.050 22:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.050 22:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:05.050 22:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.050 22:13:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:30:05.050 22:13:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:05.050 22:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:30:05.050 22:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:05.050 22:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:30:05.050 22:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:30:05.050 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:05.050 [2024-07-13 22:13:20.656516] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:05.050 [2024-07-13 22:13:20.656672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4171782 ] 00:30:05.050 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.050 [2024-07-13 22:13:20.780039] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.050 [2024-07-13 22:13:21.015539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.050 [2024-07-13 22:13:21.997114] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name f7daf120-cfd4-4dc3-ae00-efab0c45431f already exists 00:30:05.050 [2024-07-13 22:13:21.997198] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:f7daf120-cfd4-4dc3-ae00-efab0c45431f alias for bdev NVMe1n1 00:30:05.050 [2024-07-13 22:13:21.997237] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:05.050 Running I/O for 1 seconds... 
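The ERROR lines preserved in try.txt above are expected here rather than a failure: the test attaches a second controller (NVMe1) to the same subsystem namespace that NVMe0 already exposes, so the bdev layer rejects a duplicate registration of the same UUID while the controller attach itself still succeeds (the earlier grep -c NVMe check counted 2 controllers). Distilled from the rpc_cmd traces above, the sequence amounts to the calls below; this is a sketch only, assuming rpc_cmd wraps scripts/rpc.py with the bdevperf socket shown in the trace (the summary table for the one-second run follows below):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

    # re-attaching NVMe0 on the portal it already uses fails with -114,
    # with or without an explicit -x multipath mode (see the JSON-RPC
    # error responses earlier in the trace)
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable || true

    # the second portal (4421) is a genuinely new path, so attach/detach succeed
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1
    $rpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1

    # a second controller on the same namespace provokes the duplicate-UUID
    # bdev_register errors dumped above, yet both controllers remain visible
    $rpc bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    $rpc bdev_nvme_get_controllers | grep -c NVMe    # expected: 2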
00:30:05.050 00:30:05.051 Latency(us) 00:30:05.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:05.051 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:05.051 NVMe0n1 : 1.01 13222.76 51.65 0.00 0.00 9662.96 2973.39 18641.35 00:30:05.051 =================================================================================================================== 00:30:05.051 Total : 13222.76 51.65 0.00 0.00 9662.96 2973.39 18641.35 00:30:05.051 Received shutdown signal, test time was about 1.000000 seconds 00:30:05.051 00:30:05.051 Latency(us) 00:30:05.051 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:05.051 =================================================================================================================== 00:30:05.051 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:05.051 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:05.051 22:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:05.051 22:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:30:05.051 22:13:24 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:30:05.051 22:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:05.051 22:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:30:05.051 22:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:05.051 22:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:30:05.051 22:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:05.051 22:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:05.051 rmmod nvme_tcp 00:30:05.051 rmmod nvme_fabrics 00:30:05.051 rmmod nvme_keyring 00:30:05.051 22:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:05.051 22:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:30:05.051 22:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:30:05.051 22:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 4171629 ']' 00:30:05.051 22:13:24 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 4171629 00:30:05.051 22:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 4171629 ']' 00:30:05.051 22:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 4171629 00:30:05.051 22:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:30:05.051 22:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:05.051 22:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4171629 00:30:05.051 22:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:05.051 22:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:05.051 22:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4171629' 00:30:05.051 killing process with pid 4171629 00:30:05.051 22:13:24 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 4171629 00:30:05.051 22:13:24 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 4171629 00:30:06.424 22:13:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:06.424 22:13:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:06.424 22:13:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:06.424 22:13:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:06.424 22:13:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:06.424 22:13:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.424 22:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:06.424 22:13:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.955 22:13:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:08.955 00:30:08.955 real 0m10.688s 00:30:08.955 user 0m21.518s 00:30:08.955 sys 0m2.619s 00:30:08.955 22:13:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:08.955 22:13:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:08.955 ************************************ 00:30:08.955 END TEST nvmf_multicontroller 00:30:08.955 ************************************ 00:30:08.955 22:13:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:08.955 22:13:27 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:08.955 22:13:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:08.955 22:13:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:08.955 22:13:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:08.955 ************************************ 00:30:08.956 START TEST nvmf_aer 00:30:08.956 ************************************ 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:08.956 * Looking for test storage... 
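One quick cross-check on the bdevperf summary from the multicontroller run above: the MiB/s column is just the IOPS column times the 4096-byte I/O size the job declares, e.g.

    # 13222.76 IOPS x 4096 B per write, converted to MiB/s
    awk 'BEGIN { printf "%.2f\n", 13222.76 * 4096 / 1048576 }'    # prints 51.65

which matches the reported 51.65 MiB/s; the all-zero table printed after the shutdown signal is just the empty post-teardown summary.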
00:30:08.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:30:08.956 22:13:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:10.881 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:30:10.881 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:10.881 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:10.881 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:10.881 
22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:10.881 22:13:29 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:10.881 22:13:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:10.881 22:13:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:10.881 22:13:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:10.882 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:10.882 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:30:10.882 00:30:10.882 --- 10.0.0.2 ping statistics --- 00:30:10.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.882 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:10.882 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:10.882 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:30:10.882 00:30:10.882 --- 10.0.0.1 ping statistics --- 00:30:10.882 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.882 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=4174255 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 4174255 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 4174255 ']' 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:10.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:10.882 22:13:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:10.882 [2024-07-13 22:13:30.223356] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:10.882 [2024-07-13 22:13:30.223500] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:11.140 EAL: No free 2048 kB hugepages reported on node 1 00:30:11.140 [2024-07-13 22:13:30.360239] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:11.398 [2024-07-13 22:13:30.602425] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:11.398 [2024-07-13 22:13:30.602507] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:11.398 [2024-07-13 22:13:30.602535] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:11.398 [2024-07-13 22:13:30.602556] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:11.398 [2024-07-13 22:13:30.602576] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:11.398 [2024-07-13 22:13:30.602700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.398 [2024-07-13 22:13:30.602771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:11.398 [2024-07-13 22:13:30.602853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.398 [2024-07-13 22:13:30.602864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:11.964 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:11.964 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:30:11.964 22:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:11.964 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:11.964 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:11.964 22:13:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:11.964 22:13:31 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:11.964 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.964 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:11.964 [2024-07-13 22:13:31.208375] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:11.964 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.964 22:13:31 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:11.964 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.964 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:11.964 Malloc0 00:30:11.964 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.964 22:13:31 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:11.965 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.965 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:11.965 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.965 22:13:31 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:11.965 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.965 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:11.965 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.965 22:13:31 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:11.965 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.965 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:11.965 [2024-07-13 22:13:31.314232] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:30:11.965 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.965 22:13:31 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:11.965 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.965 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:11.965 [ 00:30:11.965 { 00:30:11.965 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:11.965 "subtype": "Discovery", 00:30:11.965 "listen_addresses": [], 00:30:11.965 "allow_any_host": true, 00:30:11.965 "hosts": [] 00:30:11.965 }, 00:30:11.965 { 00:30:11.965 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:11.965 "subtype": "NVMe", 00:30:11.965 "listen_addresses": [ 00:30:11.965 { 00:30:11.965 "trtype": "TCP", 00:30:11.965 "adrfam": "IPv4", 00:30:11.965 "traddr": "10.0.0.2", 00:30:11.965 "trsvcid": "4420" 00:30:11.965 } 00:30:11.965 ], 00:30:11.965 "allow_any_host": true, 00:30:11.965 "hosts": [], 00:30:11.965 "serial_number": "SPDK00000000000001", 00:30:11.965 "model_number": "SPDK bdev Controller", 00:30:11.965 "max_namespaces": 2, 00:30:11.965 "min_cntlid": 1, 00:30:11.965 "max_cntlid": 65519, 00:30:11.965 "namespaces": [ 00:30:11.965 { 00:30:11.965 "nsid": 1, 00:30:11.965 "bdev_name": "Malloc0", 00:30:11.965 "name": "Malloc0", 00:30:11.965 "nguid": "5152917A5C034EB09272DDD88B8A2A01", 00:30:11.965 "uuid": "5152917a-5c03-4eb0-9272-ddd88b8a2a01" 00:30:11.965 } 00:30:11.965 ] 00:30:11.965 } 00:30:11.965 ] 00:30:11.965 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.965 22:13:31 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:11.965 22:13:31 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:11.965 22:13:31 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=4174413 00:30:11.965 22:13:31 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:11.965 22:13:31 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:11.965 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:30:11.965 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:11.965 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:30:11.965 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:30:11.965 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:12.222 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:12.222 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:30:12.222 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:30:12.222 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:12.222 EAL: No free 2048 kB hugepages reported on node 1 00:30:12.222 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:12.222 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:30:12.223 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:30:12.223 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:12.480 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:12.480 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 3 -lt 200 ']' 00:30:12.480 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=4 00:30:12.480 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:12.480 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:12.480 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 4 -lt 200 ']' 00:30:12.480 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=5 00:30:12.480 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:30:12.480 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:12.480 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:12.480 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:30:12.480 22:13:31 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:12.480 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.480 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:12.738 Malloc1 00:30:12.738 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.738 22:13:31 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:12.738 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.738 22:13:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:12.738 22:13:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.738 22:13:32 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:12.738 22:13:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.738 22:13:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:12.738 [ 00:30:12.738 { 00:30:12.738 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:12.738 "subtype": "Discovery", 00:30:12.738 "listen_addresses": [], 00:30:12.738 "allow_any_host": true, 00:30:12.738 "hosts": [] 00:30:12.738 }, 00:30:12.738 { 00:30:12.738 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:12.738 "subtype": "NVMe", 00:30:12.738 "listen_addresses": [ 00:30:12.738 { 00:30:12.738 "trtype": "TCP", 00:30:12.738 "adrfam": "IPv4", 00:30:12.738 "traddr": "10.0.0.2", 00:30:12.738 "trsvcid": "4420" 00:30:12.738 } 00:30:12.738 ], 00:30:12.738 "allow_any_host": true, 00:30:12.738 "hosts": [], 00:30:12.738 "serial_number": "SPDK00000000000001", 00:30:12.738 "model_number": "SPDK bdev Controller", 00:30:12.738 "max_namespaces": 2, 00:30:12.738 "min_cntlid": 1, 00:30:12.738 "max_cntlid": 65519, 00:30:12.738 "namespaces": [ 00:30:12.738 { 00:30:12.738 "nsid": 1, 00:30:12.738 "bdev_name": "Malloc0", 00:30:12.738 "name": "Malloc0", 00:30:12.738 "nguid": "5152917A5C034EB09272DDD88B8A2A01", 00:30:12.738 "uuid": "5152917a-5c03-4eb0-9272-ddd88b8a2a01" 00:30:12.738 }, 00:30:12.738 { 
00:30:12.738 "nsid": 2, 00:30:12.738 "bdev_name": "Malloc1", 00:30:12.738 "name": "Malloc1", 00:30:12.738 "nguid": "FCA610800700414FAC8A2A3EC9E322E3", 00:30:12.738 "uuid": "fca61080-0700-414f-ac8a-2a3ec9e322e3" 00:30:12.738 } 00:30:12.738 ] 00:30:12.738 } 00:30:12.738 ] 00:30:12.738 22:13:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.738 22:13:32 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 4174413 00:30:12.738 Asynchronous Event Request test 00:30:12.738 Attaching to 10.0.0.2 00:30:12.738 Attached to 10.0.0.2 00:30:12.738 Registering asynchronous event callbacks... 00:30:12.738 Starting namespace attribute notice tests for all controllers... 00:30:12.739 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:12.739 aer_cb - Changed Namespace 00:30:12.739 Cleaning up... 00:30:12.739 22:13:32 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:12.739 22:13:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.739 22:13:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:12.995 22:13:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.995 22:13:32 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:12.996 22:13:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.996 22:13:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:13.252 rmmod nvme_tcp 00:30:13.252 rmmod nvme_fabrics 00:30:13.252 rmmod nvme_keyring 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 4174255 ']' 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 4174255 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 4174255 ']' 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 4174255 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4174255 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4174255' 00:30:13.252 killing process with pid 4174255 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 4174255 00:30:13.252 22:13:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 4174255 00:30:14.661 22:13:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:14.661 22:13:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:14.661 22:13:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:14.661 22:13:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:14.661 22:13:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:14.661 22:13:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.661 22:13:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:14.661 22:13:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.560 22:13:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:16.560 00:30:16.560 real 0m7.894s 00:30:16.560 user 0m11.884s 00:30:16.560 sys 0m2.253s 00:30:16.560 22:13:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:16.560 22:13:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:16.560 ************************************ 00:30:16.560 END TEST nvmf_aer 00:30:16.560 ************************************ 00:30:16.560 22:13:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:16.560 22:13:35 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:16.560 22:13:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:16.560 22:13:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:16.560 22:13:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:16.560 ************************************ 00:30:16.560 START TEST nvmf_async_init 00:30:16.560 ************************************ 00:30:16.560 22:13:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:16.560 * Looking for test storage... 
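For orientation before async_init: the nvmf_aer flow that just completed boils down to the RPC sequence below (a sketch mirroring the rpc_cmd traces above; it assumes rpc_cmd wraps scripts/rpc.py against the target's default RPC socket):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 --name Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # with the aer tool connected (-n 2 -t /tmp/aer_touch_file; readiness is
    # signalled through the touch file, which the waitforfile helper polls at
    # 0.1 s intervals up to 200 tries, i.e. a ~20 s budget), adding a second
    # namespace is what fires the "Changed Namespace" notice seen above
    $rpc bdev_malloc_create 64 4096 --name Malloc1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2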
00:30:16.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=3081bf1425114fb7948d53cdb8f516bc 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:16.561 22:13:35 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:30:16.561 22:13:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:18.464 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:18.464 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:18.464 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
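The device discovery trace above comes down to one sysfs idiom: for each matching PCI function, nvmf/common.sh globs the net/ directory under /sys/bus/pci/devices and strips the leading path to get the kernel interface names. A minimal standalone sketch of the same lookup, using the 0000:0a:00.0 function from this run (the PCI address is specific to this host):

    # Map a PCI function to its kernel net devices via sysfs, mirroring the
    # pci_net_devs assignment traced above. The glob stays literal when nothing
    # matches, which is why common.sh also length-checks the result.
    pci=0000:0a:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"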
00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:18.464 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:18.464 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:18.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:18.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:30:18.723 00:30:18.723 --- 10.0.0.2 ping statistics --- 00:30:18.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.723 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:18.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:18.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:30:18.723 00:30:18.723 --- 10.0.0.1 ping statistics --- 00:30:18.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:18.723 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=4176619 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 4176619 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 4176619 ']' 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:18.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:18.723 22:13:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:18.723 [2024-07-13 22:13:38.079108] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
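The nvmf_tcp_init sequence traced above turns the dual-port E810 into a self-contained loopback topology: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace, the initiator port cvl_0_1 stays in the root namespace, an iptables rule opens the NVMe/TCP port, and the two pings prove reachability in both directions before the target starts. Condensed from the trace (same commands, timestamps dropped):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target side, 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns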
00:30:18.723 [2024-07-13 22:13:38.079268] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:18.981 EAL: No free 2048 kB hugepages reported on node 1 00:30:18.981 [2024-07-13 22:13:38.213011] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.241 [2024-07-13 22:13:38.438448] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:19.241 [2024-07-13 22:13:38.438517] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:19.241 [2024-07-13 22:13:38.438556] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:19.241 [2024-07-13 22:13:38.438577] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:19.241 [2024-07-13 22:13:38.438596] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:19.241 [2024-07-13 22:13:38.438639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.806 22:13:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:19.806 22:13:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:30:19.806 22:13:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:19.806 22:13:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:19.806 22:13:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.806 [2024-07-13 22:13:39.008404] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.806 null0 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.806 22:13:39 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 3081bf1425114fb7948d53cdb8f516bc 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:19.806 [2024-07-13 22:13:39.048679] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:19.806 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:20.064 nvme0n1 00:30:20.064 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.064 22:13:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:20.064 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.064 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:20.064 [ 00:30:20.064 { 00:30:20.064 "name": "nvme0n1", 00:30:20.064 "aliases": [ 00:30:20.064 "3081bf14-2511-4fb7-948d-53cdb8f516bc" 00:30:20.064 ], 00:30:20.064 "product_name": "NVMe disk", 00:30:20.064 "block_size": 512, 00:30:20.064 "num_blocks": 2097152, 00:30:20.064 "uuid": "3081bf14-2511-4fb7-948d-53cdb8f516bc", 00:30:20.064 "assigned_rate_limits": { 00:30:20.064 "rw_ios_per_sec": 0, 00:30:20.064 "rw_mbytes_per_sec": 0, 00:30:20.064 "r_mbytes_per_sec": 0, 00:30:20.064 "w_mbytes_per_sec": 0 00:30:20.064 }, 00:30:20.064 "claimed": false, 00:30:20.064 "zoned": false, 00:30:20.064 "supported_io_types": { 00:30:20.064 "read": true, 00:30:20.064 "write": true, 00:30:20.064 "unmap": false, 00:30:20.064 "flush": true, 00:30:20.064 "reset": true, 00:30:20.064 "nvme_admin": true, 00:30:20.064 "nvme_io": true, 00:30:20.064 "nvme_io_md": false, 00:30:20.064 "write_zeroes": true, 00:30:20.064 "zcopy": false, 00:30:20.064 "get_zone_info": false, 00:30:20.064 "zone_management": false, 00:30:20.064 "zone_append": false, 00:30:20.064 "compare": true, 00:30:20.064 "compare_and_write": true, 00:30:20.064 "abort": true, 00:30:20.064 "seek_hole": false, 00:30:20.064 "seek_data": false, 00:30:20.064 "copy": true, 00:30:20.064 "nvme_iov_md": false 00:30:20.064 }, 00:30:20.064 "memory_domains": [ 00:30:20.064 { 00:30:20.064 "dma_device_id": "system", 00:30:20.064 "dma_device_type": 1 00:30:20.064 } 00:30:20.064 ], 00:30:20.064 "driver_specific": { 00:30:20.064 "nvme": [ 00:30:20.064 { 00:30:20.064 "trid": { 00:30:20.064 "trtype": "TCP", 00:30:20.064 "adrfam": "IPv4", 00:30:20.064 "traddr": "10.0.0.2", 
00:30:20.064 "trsvcid": "4420", 00:30:20.064 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:20.064 }, 00:30:20.064 "ctrlr_data": { 00:30:20.064 "cntlid": 1, 00:30:20.064 "vendor_id": "0x8086", 00:30:20.064 "model_number": "SPDK bdev Controller", 00:30:20.064 "serial_number": "00000000000000000000", 00:30:20.064 "firmware_revision": "24.09", 00:30:20.064 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:20.064 "oacs": { 00:30:20.064 "security": 0, 00:30:20.064 "format": 0, 00:30:20.064 "firmware": 0, 00:30:20.064 "ns_manage": 0 00:30:20.064 }, 00:30:20.064 "multi_ctrlr": true, 00:30:20.064 "ana_reporting": false 00:30:20.064 }, 00:30:20.064 "vs": { 00:30:20.064 "nvme_version": "1.3" 00:30:20.064 }, 00:30:20.064 "ns_data": { 00:30:20.064 "id": 1, 00:30:20.064 "can_share": true 00:30:20.064 } 00:30:20.064 } 00:30:20.064 ], 00:30:20.064 "mp_policy": "active_passive" 00:30:20.064 } 00:30:20.064 } 00:30:20.064 ] 00:30:20.064 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.064 22:13:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:20.064 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.065 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:20.065 [2024-07-13 22:13:39.305312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:20.065 [2024-07-13 22:13:39.305436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2780 (9): Bad file descriptor 00:30:20.065 [2024-07-13 22:13:39.438093] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:20.065 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.065 22:13:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:20.065 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.065 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:20.065 [ 00:30:20.065 { 00:30:20.065 "name": "nvme0n1", 00:30:20.065 "aliases": [ 00:30:20.065 "3081bf14-2511-4fb7-948d-53cdb8f516bc" 00:30:20.065 ], 00:30:20.065 "product_name": "NVMe disk", 00:30:20.065 "block_size": 512, 00:30:20.065 "num_blocks": 2097152, 00:30:20.065 "uuid": "3081bf14-2511-4fb7-948d-53cdb8f516bc", 00:30:20.065 "assigned_rate_limits": { 00:30:20.065 "rw_ios_per_sec": 0, 00:30:20.065 "rw_mbytes_per_sec": 0, 00:30:20.065 "r_mbytes_per_sec": 0, 00:30:20.065 "w_mbytes_per_sec": 0 00:30:20.065 }, 00:30:20.065 "claimed": false, 00:30:20.065 "zoned": false, 00:30:20.065 "supported_io_types": { 00:30:20.065 "read": true, 00:30:20.065 "write": true, 00:30:20.065 "unmap": false, 00:30:20.065 "flush": true, 00:30:20.065 "reset": true, 00:30:20.065 "nvme_admin": true, 00:30:20.065 "nvme_io": true, 00:30:20.065 "nvme_io_md": false, 00:30:20.065 "write_zeroes": true, 00:30:20.065 "zcopy": false, 00:30:20.065 "get_zone_info": false, 00:30:20.065 "zone_management": false, 00:30:20.065 "zone_append": false, 00:30:20.065 "compare": true, 00:30:20.065 "compare_and_write": true, 00:30:20.065 "abort": true, 00:30:20.065 "seek_hole": false, 00:30:20.065 "seek_data": false, 00:30:20.065 "copy": true, 00:30:20.065 "nvme_iov_md": false 00:30:20.065 }, 00:30:20.065 "memory_domains": [ 00:30:20.065 { 00:30:20.065 "dma_device_id": "system", 00:30:20.065 
"dma_device_type": 1 00:30:20.065 } 00:30:20.065 ], 00:30:20.065 "driver_specific": { 00:30:20.065 "nvme": [ 00:30:20.065 { 00:30:20.065 "trid": { 00:30:20.065 "trtype": "TCP", 00:30:20.065 "adrfam": "IPv4", 00:30:20.065 "traddr": "10.0.0.2", 00:30:20.065 "trsvcid": "4420", 00:30:20.065 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:20.065 }, 00:30:20.065 "ctrlr_data": { 00:30:20.065 "cntlid": 2, 00:30:20.065 "vendor_id": "0x8086", 00:30:20.065 "model_number": "SPDK bdev Controller", 00:30:20.065 "serial_number": "00000000000000000000", 00:30:20.065 "firmware_revision": "24.09", 00:30:20.065 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:20.065 "oacs": { 00:30:20.065 "security": 0, 00:30:20.065 "format": 0, 00:30:20.065 "firmware": 0, 00:30:20.065 "ns_manage": 0 00:30:20.065 }, 00:30:20.065 "multi_ctrlr": true, 00:30:20.065 "ana_reporting": false 00:30:20.065 }, 00:30:20.065 "vs": { 00:30:20.065 "nvme_version": "1.3" 00:30:20.065 }, 00:30:20.065 "ns_data": { 00:30:20.065 "id": 1, 00:30:20.065 "can_share": true 00:30:20.065 } 00:30:20.065 } 00:30:20.065 ], 00:30:20.065 "mp_policy": "active_passive" 00:30:20.065 } 00:30:20.065 } 00:30:20.065 ] 00:30:20.065 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.065 22:13:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:20.065 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.065 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:20.323 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.XhtGjYNIEz 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.XhtGjYNIEz 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:20.324 [2024-07-13 22:13:39.490061] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:20.324 [2024-07-13 22:13:39.490266] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XhtGjYNIEz 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:20.324 [2024-07-13 22:13:39.498054] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XhtGjYNIEz 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:20.324 [2024-07-13 22:13:39.506081] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:20.324 [2024-07-13 22:13:39.506196] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:30:20.324 nvme0n1 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:20.324 [ 00:30:20.324 { 00:30:20.324 "name": "nvme0n1", 00:30:20.324 "aliases": [ 00:30:20.324 "3081bf14-2511-4fb7-948d-53cdb8f516bc" 00:30:20.324 ], 00:30:20.324 "product_name": "NVMe disk", 00:30:20.324 "block_size": 512, 00:30:20.324 "num_blocks": 2097152, 00:30:20.324 "uuid": "3081bf14-2511-4fb7-948d-53cdb8f516bc", 00:30:20.324 "assigned_rate_limits": { 00:30:20.324 "rw_ios_per_sec": 0, 00:30:20.324 "rw_mbytes_per_sec": 0, 00:30:20.324 "r_mbytes_per_sec": 0, 00:30:20.324 "w_mbytes_per_sec": 0 00:30:20.324 }, 00:30:20.324 "claimed": false, 00:30:20.324 "zoned": false, 00:30:20.324 "supported_io_types": { 00:30:20.324 "read": true, 00:30:20.324 "write": true, 00:30:20.324 "unmap": false, 00:30:20.324 "flush": true, 00:30:20.324 "reset": true, 00:30:20.324 "nvme_admin": true, 00:30:20.324 "nvme_io": true, 00:30:20.324 "nvme_io_md": false, 00:30:20.324 "write_zeroes": true, 00:30:20.324 "zcopy": false, 00:30:20.324 "get_zone_info": false, 00:30:20.324 "zone_management": false, 00:30:20.324 "zone_append": false, 00:30:20.324 "compare": true, 00:30:20.324 "compare_and_write": true, 00:30:20.324 "abort": true, 00:30:20.324 "seek_hole": false, 00:30:20.324 "seek_data": false, 00:30:20.324 "copy": true, 00:30:20.324 "nvme_iov_md": false 00:30:20.324 }, 00:30:20.324 "memory_domains": [ 00:30:20.324 { 00:30:20.324 "dma_device_id": "system", 00:30:20.324 "dma_device_type": 1 00:30:20.324 } 00:30:20.324 ], 00:30:20.324 "driver_specific": { 00:30:20.324 "nvme": [ 00:30:20.324 { 00:30:20.324 "trid": { 00:30:20.324 "trtype": "TCP", 00:30:20.324 "adrfam": "IPv4", 00:30:20.324 "traddr": "10.0.0.2", 00:30:20.324 "trsvcid": "4421", 00:30:20.324 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:20.324 }, 00:30:20.324 "ctrlr_data": { 00:30:20.324 "cntlid": 3, 00:30:20.324 "vendor_id": "0x8086", 00:30:20.324 "model_number": "SPDK bdev Controller", 00:30:20.324 "serial_number": "00000000000000000000", 00:30:20.324 "firmware_revision": "24.09", 00:30:20.324 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:30:20.324 "oacs": { 00:30:20.324 "security": 0, 00:30:20.324 "format": 0, 00:30:20.324 "firmware": 0, 00:30:20.324 "ns_manage": 0 00:30:20.324 }, 00:30:20.324 "multi_ctrlr": true, 00:30:20.324 "ana_reporting": false 00:30:20.324 }, 00:30:20.324 "vs": { 00:30:20.324 "nvme_version": "1.3" 00:30:20.324 }, 00:30:20.324 "ns_data": { 00:30:20.324 "id": 1, 00:30:20.324 "can_share": true 00:30:20.324 } 00:30:20.324 } 00:30:20.324 ], 00:30:20.324 "mp_policy": "active_passive" 00:30:20.324 } 00:30:20.324 } 00:30:20.324 ] 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.XhtGjYNIEz 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:20.324 rmmod nvme_tcp 00:30:20.324 rmmod nvme_fabrics 00:30:20.324 rmmod nvme_keyring 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 4176619 ']' 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 4176619 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 4176619 ']' 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 4176619 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4176619 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4176619' 00:30:20.324 killing process with pid 4176619 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 4176619 00:30:20.324 [2024-07-13 22:13:39.702080] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:30:20.324 [2024-07-13 22:13:39.702143] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:20.324 22:13:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 4176619 00:30:21.698 22:13:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:21.698 22:13:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:21.698 22:13:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:21.698 22:13:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:21.698 22:13:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:21.698 22:13:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.698 22:13:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:21.698 22:13:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:24.228 22:13:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:24.228 00:30:24.228 real 0m7.216s 00:30:24.228 user 0m3.897s 00:30:24.228 sys 0m1.915s 00:30:24.228 22:13:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:24.228 22:13:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:24.228 ************************************ 00:30:24.228 END TEST nvmf_async_init 00:30:24.228 ************************************ 00:30:24.228 22:13:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:24.228 22:13:43 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:24.228 22:13:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:24.228 22:13:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:24.228 22:13:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:24.228 ************************************ 00:30:24.228 START TEST dma 00:30:24.228 ************************************ 00:30:24.228 22:13:43 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:24.228 * Looking for test storage... 
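Everything the nvmf_async_init test did to the target above went over JSON-RPC; rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. Written out as direct rpc.py calls, the sequence from the trace is (the nguid and PSK path are the values generated earlier in this run and change on every invocation):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py bdev_null_create null0 1024 512
    rpc.py bdev_wait_for_examine
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
        -g 3081bf1425114fb7948d53cdb8f516bc
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_nvme_reset_controller nvme0    # cntlid goes 1 -> 2 in the bdev dumps
    # TLS leg: secure listener on 4421, host admitted by PSK
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 \
        -s 4421 --secure-channel
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.XhtGjYNIEz
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XhtGjYNIEz

Note the two deprecation warnings above: the file-based PSK mechanism (nvmf_tcp_psk_path and spdk_nvme_ctrlr_opts.psk) is flagged for removal in v24.09.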
00:30:24.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:24.228 22:13:43 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:24.228 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:30:24.228 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:24.228 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:24.228 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:24.228 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:24.228 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:24.228 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:24.229 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:24.229 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:24.229 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:24.229 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:24.229 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:24.229 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:24.229 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:24.229 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:24.229 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:24.229 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:24.229 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:24.229 22:13:43 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:24.229 22:13:43 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:24.229 22:13:43 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:24.229 22:13:43 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.229 22:13:43 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.229 22:13:43 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.229 22:13:43 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:30:24.229 22:13:43 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.229 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:30:24.229 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:24.229 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:24.229 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:24.229 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:24.229 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:24.229 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:24.229 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:24.229 22:13:43 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:24.229 22:13:43 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:24.229 22:13:43 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:30:24.229 00:30:24.229 real 0m0.065s 00:30:24.229 user 0m0.027s 00:30:24.229 sys 0m0.043s 00:30:24.229 22:13:43 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:24.229 22:13:43 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:30:24.229 ************************************ 00:30:24.229 END TEST dma 00:30:24.229 ************************************ 00:30:24.229 22:13:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:24.229 22:13:43 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:24.229 22:13:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:24.229 22:13:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:24.229 22:13:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:24.229 ************************************ 00:30:24.229 START TEST nvmf_identify 00:30:24.229 ************************************ 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:24.229 * Looking for test storage... 
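The dma suite is effectively a smoke-exit on this configuration: host/dma.sh only applies to RDMA transports, so with --transport=tcp it returns success before doing any work, which is why the whole test accounts for roughly 65 ms of wall time above. The guard, as it expands in the trace:

    # host/dma.sh@12-13: nothing to do unless the transport is rdma
    if [ "tcp" != "rdma" ]; then
        exit 0
    fi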
00:30:24.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:30:24.229 22:13:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:26.130 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:26.130 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:26.131 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:26.131 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:26.131 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:26.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:26.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:30:26.131 00:30:26.131 --- 10.0.0.2 ping statistics --- 00:30:26.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.131 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:26.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:26.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:30:26.131 00:30:26.131 --- 10.0.0.1 ping statistics --- 00:30:26.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.131 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=4178997 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 4178997 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 4178997 ']' 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:26.131 22:13:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:26.131 [2024-07-13 22:13:45.494828] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:26.131 [2024-07-13 22:13:45.495006] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:26.390 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.390 [2024-07-13 22:13:45.637406] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:26.648 [2024-07-13 22:13:45.903121] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
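
For reference, the interface plumbing traced above turns the two E810 ports into a back-to-back NVMe/TCP rig: cvl_0_0 is moved into a private network namespace to host the target, while cvl_0_1 stays in the root namespace as the initiator. Condensed into plain commands (a sketch; interface names, namespace, and addresses exactly as in this run):

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the initiator port
  ping -c 1 10.0.0.2                                                  # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> initiator

The two single-packet pings verify the path in both directions before the target comes up; nvmf_tgt is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as above), so it listens on 10.0.0.2:4420 while the identify tools connect from the root namespace.
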
00:30:26.648 [2024-07-13 22:13:45.903203] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:26.648 [2024-07-13 22:13:45.903232] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:26.648 [2024-07-13 22:13:45.903254] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:26.648 [2024-07-13 22:13:45.903277] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:26.648 [2024-07-13 22:13:45.903401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.648 [2024-07-13 22:13:45.903471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:26.648 [2024-07-13 22:13:45.903553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.648 [2024-07-13 22:13:45.903563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:27.214 [2024-07-13 22:13:46.442146] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:27.214 Malloc0 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:27.214 [2024-07-13 22:13:46.572269] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.214 22:13:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:27.214 [ 00:30:27.214 { 00:30:27.214 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:27.214 "subtype": "Discovery", 00:30:27.214 "listen_addresses": [ 00:30:27.214 { 00:30:27.214 "trtype": "TCP", 00:30:27.214 "adrfam": "IPv4", 00:30:27.214 "traddr": "10.0.0.2", 00:30:27.214 "trsvcid": "4420" 00:30:27.214 } 00:30:27.214 ], 00:30:27.214 "allow_any_host": true, 00:30:27.214 "hosts": [] 00:30:27.214 }, 00:30:27.214 { 00:30:27.215 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:27.215 "subtype": "NVMe", 00:30:27.215 "listen_addresses": [ 00:30:27.215 { 00:30:27.215 "trtype": "TCP", 00:30:27.215 "adrfam": "IPv4", 00:30:27.215 "traddr": "10.0.0.2", 00:30:27.215 "trsvcid": "4420" 00:30:27.215 } 00:30:27.215 ], 00:30:27.215 "allow_any_host": true, 00:30:27.215 "hosts": [], 00:30:27.215 "serial_number": "SPDK00000000000001", 00:30:27.215 "model_number": "SPDK bdev Controller", 00:30:27.215 "max_namespaces": 32, 00:30:27.215 "min_cntlid": 1, 00:30:27.215 "max_cntlid": 65519, 00:30:27.215 "namespaces": [ 00:30:27.215 { 00:30:27.215 "nsid": 1, 00:30:27.215 "bdev_name": "Malloc0", 00:30:27.215 "name": "Malloc0", 00:30:27.215 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:27.215 "eui64": "ABCDEF0123456789", 00:30:27.215 "uuid": "b901a19b-bcd7-4b17-825c-4378e94822d8" 00:30:27.215 } 00:30:27.215 ] 00:30:27.215 } 00:30:27.215 ] 00:30:27.215 22:13:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.215 22:13:46 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:27.475 [2024-07-13 22:13:46.637907] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
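
The rpc_cmd calls traced above are, in effect, invocations of SPDK's scripts/rpc.py against the target's /var/tmp/spdk.sock socket. Replayed against a standalone target, the same configuration looks roughly like this (a sketch; every subcommand and argument is taken verbatim from the trace):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport, options as in this run
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM-backed bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_get_subsystems                       # dumps the JSON shown above

The nvmf_get_subsystems output above confirms the result: the discovery subsystem plus cnode1, both listening on 10.0.0.2:4420, with Malloc0 attached as nsid 1 under the given nguid/eui64.
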
00:30:27.475 [2024-07-13 22:13:46.637999] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4179152 ] 00:30:27.475 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.475 [2024-07-13 22:13:46.695184] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:30:27.475 [2024-07-13 22:13:46.695314] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:27.475 [2024-07-13 22:13:46.695340] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:27.475 [2024-07-13 22:13:46.695370] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:27.475 [2024-07-13 22:13:46.695392] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:27.475 [2024-07-13 22:13:46.698954] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:30:27.475 [2024-07-13 22:13:46.699025] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0 00:30:27.475 [2024-07-13 22:13:46.706742] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:27.475 [2024-07-13 22:13:46.706774] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:27.475 [2024-07-13 22:13:46.706789] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:27.475 [2024-07-13 22:13:46.706800] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:27.475 [2024-07-13 22:13:46.706878] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.475 [2024-07-13 22:13:46.706900] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.475 [2024-07-13 22:13:46.706918] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.475 [2024-07-13 22:13:46.706949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:27.475 [2024-07-13 22:13:46.706992] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.475 [2024-07-13 22:13:46.712893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.475 [2024-07-13 22:13:46.712922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.475 [2024-07-13 22:13:46.712937] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.475 [2024-07-13 22:13:46.712950] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.475 [2024-07-13 22:13:46.712981] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:27.475 [2024-07-13 22:13:46.713009] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:30:27.475 [2024-07-13 22:13:46.713030] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:30:27.475 [2024-07-13 22:13:46.713060] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.475 [2024-07-13 22:13:46.713075] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.475 [2024-07-13 22:13:46.713086] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.475 [2024-07-13 22:13:46.713108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.475 [2024-07-13 22:13:46.713143] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.475 [2024-07-13 22:13:46.713329] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.475 [2024-07-13 22:13:46.713353] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.475 [2024-07-13 22:13:46.713365] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.475 [2024-07-13 22:13:46.713383] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.475 [2024-07-13 22:13:46.713399] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:30:27.475 [2024-07-13 22:13:46.713421] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:30:27.475 [2024-07-13 22:13:46.713457] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.475 [2024-07-13 22:13:46.713470] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.475 [2024-07-13 22:13:46.713481] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.475 [2024-07-13 22:13:46.713505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.475 [2024-07-13 22:13:46.713558] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.475 [2024-07-13 22:13:46.713713] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.475 [2024-07-13 22:13:46.713734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.475 [2024-07-13 22:13:46.713745] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.475 [2024-07-13 22:13:46.713759] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.475 [2024-07-13 22:13:46.713775] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:30:27.475 [2024-07-13 22:13:46.713802] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:30:27.475 [2024-07-13 22:13:46.713823] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.475 [2024-07-13 22:13:46.713836] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.475 [2024-07-13 22:13:46.713863] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.475 [2024-07-13 22:13:46.713892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.475 [2024-07-13 22:13:46.713939] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.475 [2024-07-13 22:13:46.714105] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.475 [2024-07-13 22:13:46.714127] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.475 [2024-07-13 22:13:46.714139] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.475 [2024-07-13 22:13:46.714150] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.475 [2024-07-13 22:13:46.714173] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:27.475 [2024-07-13 22:13:46.714204] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.476 [2024-07-13 22:13:46.714221] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.476 [2024-07-13 22:13:46.714248] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.476 [2024-07-13 22:13:46.714272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.476 [2024-07-13 22:13:46.714302] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.476 [2024-07-13 22:13:46.714486] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.476 [2024-07-13 22:13:46.714509] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.476 [2024-07-13 22:13:46.714520] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.476 [2024-07-13 22:13:46.714531] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.476 [2024-07-13 22:13:46.714545] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:30:27.476 [2024-07-13 22:13:46.714559] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:30:27.476 [2024-07-13 22:13:46.714579] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:27.476 [2024-07-13 22:13:46.714696] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:30:27.476 [2024-07-13 22:13:46.714710] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:27.476 [2024-07-13 22:13:46.714732] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.476 [2024-07-13 22:13:46.714746] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.476 [2024-07-13 22:13:46.714756] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.476 [2024-07-13 22:13:46.714785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.476 [2024-07-13 22:13:46.714836] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.476 [2024-07-13 22:13:46.715012] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.476 [2024-07-13 22:13:46.715034] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:30:27.476 [2024-07-13 22:13:46.715046] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.476 [2024-07-13 22:13:46.715057] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.476 [2024-07-13 22:13:46.715071] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:27.476 [2024-07-13 22:13:46.715103] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.476 [2024-07-13 22:13:46.715120] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.476 [2024-07-13 22:13:46.715131] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.476 [2024-07-13 22:13:46.715150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.476 [2024-07-13 22:13:46.715208] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.476 [2024-07-13 22:13:46.715380] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.476 [2024-07-13 22:13:46.715401] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.476 [2024-07-13 22:13:46.715416] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.476 [2024-07-13 22:13:46.715428] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.476 [2024-07-13 22:13:46.715445] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:27.476 [2024-07-13 22:13:46.715470] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:30:27.476 [2024-07-13 22:13:46.715500] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:30:27.476 [2024-07-13 22:13:46.715523] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:30:27.476 [2024-07-13 22:13:46.715564] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.476 [2024-07-13 22:13:46.715579] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.476 [2024-07-13 22:13:46.715598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.476 [2024-07-13 22:13:46.715628] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.476 [2024-07-13 22:13:46.715856] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:27.476 [2024-07-13 22:13:46.715890] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:27.476 [2024-07-13 22:13:46.715904] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:27.476 [2024-07-13 22:13:46.715916] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:27.476 [2024-07-13 22:13:46.715930] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:27.476 [2024-07-13 22:13:46.715942] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.476 [2024-07-13 22:13:46.715970] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:27.476 [2024-07-13 22:13:46.715991] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:27.476 [2024-07-13 22:13:46.757886] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.476 [2024-07-13 22:13:46.757916] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.476 [2024-07-13 22:13:46.757929] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.476 [2024-07-13 22:13:46.757940] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.476 [2024-07-13 22:13:46.757970] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:30:27.476 [2024-07-13 22:13:46.757987] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:30:27.476 [2024-07-13 22:13:46.758003] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:30:27.476 [2024-07-13 22:13:46.758017] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:30:27.476 [2024-07-13 22:13:46.758032] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:30:27.476 [2024-07-13 22:13:46.758046] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:30:27.476 [2024-07-13 22:13:46.758069] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:30:27.476 [2024-07-13 22:13:46.758110] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.476 [2024-07-13 22:13:46.758129] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.476 [2024-07-13 22:13:46.758144] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.476 [2024-07-13 22:13:46.758186] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:27.476 [2024-07-13 22:13:46.758221] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.476 [2024-07-13 22:13:46.758413] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.476 [2024-07-13 22:13:46.758435] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.476 [2024-07-13 22:13:46.758446] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.476 [2024-07-13 22:13:46.758457] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.476 [2024-07-13 22:13:46.758476] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.476 [2024-07-13 22:13:46.758489] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.476 [2024-07-13 22:13:46.758500] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 
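
Stripped of the per-PDU noise, the -L all trace above is the standard fabrics controller bring-up against the discovery subsystem: FABRIC CONNECT on the admin queue, property reads of VS and CAP, CC.EN toggled through 0 and back to 1 with waits on CSTS.RDY, then IDENTIFY and the async-event configuration; the AER submissions and keep-alive setup follow below. The same discovery exchange can be reproduced with the kernel host stack from the initiator side (a sketch; assumes nvme-cli is installed, and nvme-tcp was already modprobed earlier in this run):

  # query the discovery subsystem the target exposes on 10.0.0.2:4420
  nvme discover -t tcp -a 10.0.0.2 -s 4420

Its records correspond to the two Discovery Log Entry blocks printed further down: entry 0 for the discovery subsystem itself and entry 1 for nqn.2016-06.io.spdk:cnode1.
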
00:30:27.476 [2024-07-13 22:13:46.758519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:27.476 [2024-07-13 22:13:46.758561] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.476 [2024-07-13 22:13:46.758573] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.476 [2024-07-13 22:13:46.758582] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:27.476 [2024-07-13 22:13:46.758598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:27.476 [2024-07-13 22:13:46.758617] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.758629] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.758639] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:27.477 [2024-07-13 22:13:46.758654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:27.477 [2024-07-13 22:13:46.758669] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.758695] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.758704] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.477 [2024-07-13 22:13:46.758719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:27.477 [2024-07-13 22:13:46.758732] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:30:27.477 [2024-07-13 22:13:46.758773] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:27.477 [2024-07-13 22:13:46.758797] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.758810] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:27.477 [2024-07-13 22:13:46.758832] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.477 [2024-07-13 22:13:46.758890] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.477 [2024-07-13 22:13:46.758910] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:27.477 [2024-07-13 22:13:46.758922] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:27.477 [2024-07-13 22:13:46.758934] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.477 [2024-07-13 22:13:46.758946] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:27.477 [2024-07-13 22:13:46.759148] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.477 [2024-07-13 22:13:46.759173] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.477 [2024-07-13 22:13:46.759201] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.759212] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:27.477 [2024-07-13 22:13:46.759227] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:30:27.477 [2024-07-13 22:13:46.759241] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:30:27.477 [2024-07-13 22:13:46.759273] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.759288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:27.477 [2024-07-13 22:13:46.759307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.477 [2024-07-13 22:13:46.759336] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:27.477 [2024-07-13 22:13:46.759542] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:27.477 [2024-07-13 22:13:46.759567] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:27.477 [2024-07-13 22:13:46.759580] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.759592] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:27.477 [2024-07-13 22:13:46.759604] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:27.477 [2024-07-13 22:13:46.759616] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.759650] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.759664] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.759682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.477 [2024-07-13 22:13:46.759705] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.477 [2024-07-13 22:13:46.759717] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.759728] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:27.477 [2024-07-13 22:13:46.759765] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:30:27.477 [2024-07-13 22:13:46.759828] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.759845] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:27.477 [2024-07-13 22:13:46.759898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.477 [2024-07-13 22:13:46.759920] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.759932] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.759943] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=5 on tqpair(0x615000015700) 00:30:27.477 [2024-07-13 22:13:46.759960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:27.477 [2024-07-13 22:13:46.759997] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:27.477 [2024-07-13 22:13:46.760016] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:27.477 [2024-07-13 22:13:46.760346] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:27.477 [2024-07-13 22:13:46.760368] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:27.477 [2024-07-13 22:13:46.760392] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.760405] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=1024, cccid=4 00:30:27.477 [2024-07-13 22:13:46.760417] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=1024 00:30:27.477 [2024-07-13 22:13:46.760428] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.760445] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.760458] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.760476] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.477 [2024-07-13 22:13:46.760492] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.477 [2024-07-13 22:13:46.760502] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.760513] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:27.477 [2024-07-13 22:13:46.804895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.477 [2024-07-13 22:13:46.804925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.477 [2024-07-13 22:13:46.804938] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.804950] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:27.477 [2024-07-13 22:13:46.804987] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.805004] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:27.477 [2024-07-13 22:13:46.805034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.477 [2024-07-13 22:13:46.805078] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:27.477 [2024-07-13 22:13:46.805313] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:27.477 [2024-07-13 22:13:46.805334] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:27.477 [2024-07-13 22:13:46.805346] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.805357] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=3072, cccid=4 00:30:27.477 [2024-07-13 22:13:46.805385] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=3072 00:30:27.477 [2024-07-13 22:13:46.805397] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.805414] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.805427] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.805445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.477 [2024-07-13 22:13:46.805462] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.477 [2024-07-13 22:13:46.805473] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.477 [2024-07-13 22:13:46.805483] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:27.478 [2024-07-13 22:13:46.805509] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.478 [2024-07-13 22:13:46.805525] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:27.478 [2024-07-13 22:13:46.805544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.478 [2024-07-13 22:13:46.805612] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:27.478 [2024-07-13 22:13:46.805815] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:27.478 [2024-07-13 22:13:46.805836] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:27.478 [2024-07-13 22:13:46.805864] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:27.478 [2024-07-13 22:13:46.805885] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8, cccid=4 00:30:27.478 [2024-07-13 22:13:46.805898] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=8 00:30:27.478 [2024-07-13 22:13:46.805909] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.478 [2024-07-13 22:13:46.805926] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:27.478 [2024-07-13 22:13:46.805939] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:27.478 [2024-07-13 22:13:46.846036] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.478 [2024-07-13 22:13:46.846064] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.478 [2024-07-13 22:13:46.846076] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.478 [2024-07-13 22:13:46.846088] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:27.478 ===================================================== 00:30:27.478 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:27.478 ===================================================== 00:30:27.478 Controller Capabilities/Features 00:30:27.478 ================================ 00:30:27.478 Vendor ID: 0000 00:30:27.478 Subsystem Vendor ID: 0000 00:30:27.478 Serial Number: .................... 00:30:27.478 Model Number: ........................................ 
00:30:27.478 Firmware Version: 24.09 00:30:27.478 Recommended Arb Burst: 0 00:30:27.478 IEEE OUI Identifier: 00 00 00 00:30:27.478 Multi-path I/O 00:30:27.478 May have multiple subsystem ports: No 00:30:27.478 May have multiple controllers: No 00:30:27.478 Associated with SR-IOV VF: No 00:30:27.478 Max Data Transfer Size: 131072 00:30:27.478 Max Number of Namespaces: 0 00:30:27.478 Max Number of I/O Queues: 1024 00:30:27.478 NVMe Specification Version (VS): 1.3 00:30:27.478 NVMe Specification Version (Identify): 1.3 00:30:27.478 Maximum Queue Entries: 128 00:30:27.478 Contiguous Queues Required: Yes 00:30:27.478 Arbitration Mechanisms Supported 00:30:27.478 Weighted Round Robin: Not Supported 00:30:27.478 Vendor Specific: Not Supported 00:30:27.478 Reset Timeout: 15000 ms 00:30:27.478 Doorbell Stride: 4 bytes 00:30:27.478 NVM Subsystem Reset: Not Supported 00:30:27.478 Command Sets Supported 00:30:27.478 NVM Command Set: Supported 00:30:27.478 Boot Partition: Not Supported 00:30:27.478 Memory Page Size Minimum: 4096 bytes 00:30:27.478 Memory Page Size Maximum: 4096 bytes 00:30:27.478 Persistent Memory Region: Not Supported 00:30:27.478 Optional Asynchronous Events Supported 00:30:27.478 Namespace Attribute Notices: Not Supported 00:30:27.478 Firmware Activation Notices: Not Supported 00:30:27.478 ANA Change Notices: Not Supported 00:30:27.478 PLE Aggregate Log Change Notices: Not Supported 00:30:27.478 LBA Status Info Alert Notices: Not Supported 00:30:27.478 EGE Aggregate Log Change Notices: Not Supported 00:30:27.478 Normal NVM Subsystem Shutdown event: Not Supported 00:30:27.478 Zone Descriptor Change Notices: Not Supported 00:30:27.478 Discovery Log Change Notices: Supported 00:30:27.478 Controller Attributes 00:30:27.478 128-bit Host Identifier: Not Supported 00:30:27.478 Non-Operational Permissive Mode: Not Supported 00:30:27.478 NVM Sets: Not Supported 00:30:27.478 Read Recovery Levels: Not Supported 00:30:27.478 Endurance Groups: Not Supported 00:30:27.478 Predictable Latency Mode: Not Supported 00:30:27.478 Traffic Based Keep ALive: Not Supported 00:30:27.478 Namespace Granularity: Not Supported 00:30:27.478 SQ Associations: Not Supported 00:30:27.478 UUID List: Not Supported 00:30:27.478 Multi-Domain Subsystem: Not Supported 00:30:27.478 Fixed Capacity Management: Not Supported 00:30:27.478 Variable Capacity Management: Not Supported 00:30:27.478 Delete Endurance Group: Not Supported 00:30:27.478 Delete NVM Set: Not Supported 00:30:27.478 Extended LBA Formats Supported: Not Supported 00:30:27.478 Flexible Data Placement Supported: Not Supported 00:30:27.478 00:30:27.478 Controller Memory Buffer Support 00:30:27.478 ================================ 00:30:27.478 Supported: No 00:30:27.478 00:30:27.478 Persistent Memory Region Support 00:30:27.478 ================================ 00:30:27.478 Supported: No 00:30:27.478 00:30:27.478 Admin Command Set Attributes 00:30:27.478 ============================ 00:30:27.478 Security Send/Receive: Not Supported 00:30:27.478 Format NVM: Not Supported 00:30:27.478 Firmware Activate/Download: Not Supported 00:30:27.478 Namespace Management: Not Supported 00:30:27.478 Device Self-Test: Not Supported 00:30:27.478 Directives: Not Supported 00:30:27.478 NVMe-MI: Not Supported 00:30:27.478 Virtualization Management: Not Supported 00:30:27.478 Doorbell Buffer Config: Not Supported 00:30:27.478 Get LBA Status Capability: Not Supported 00:30:27.478 Command & Feature Lockdown Capability: Not Supported 00:30:27.478 Abort Command Limit: 1 00:30:27.478 Async 
Event Request Limit: 4 00:30:27.478 Number of Firmware Slots: N/A 00:30:27.478 Firmware Slot 1 Read-Only: N/A 00:30:27.478 Firmware Activation Without Reset: N/A 00:30:27.478 Multiple Update Detection Support: N/A 00:30:27.478 Firmware Update Granularity: No Information Provided 00:30:27.478 Per-Namespace SMART Log: No 00:30:27.478 Asymmetric Namespace Access Log Page: Not Supported 00:30:27.478 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:30:27.478 Command Effects Log Page: Not Supported 00:30:27.478 Get Log Page Extended Data: Supported 00:30:27.478 Telemetry Log Pages: Not Supported 00:30:27.478 Persistent Event Log Pages: Not Supported 00:30:27.478 Supported Log Pages Log Page: May Support 00:30:27.478 Commands Supported & Effects Log Page: Not Supported 00:30:27.478 Feature Identifiers & Effects Log Page:May Support 00:30:27.478 NVMe-MI Commands & Effects Log Page: May Support 00:30:27.478 Data Area 4 for Telemetry Log: Not Supported 00:30:27.478 Error Log Page Entries Supported: 128 00:30:27.478 Keep Alive: Not Supported 00:30:27.478 00:30:27.478 NVM Command Set Attributes 00:30:27.478 ========================== 00:30:27.478 Submission Queue Entry Size 00:30:27.478 Max: 1 00:30:27.478 Min: 1 00:30:27.478 Completion Queue Entry Size 00:30:27.478 Max: 1 00:30:27.478 Min: 1 00:30:27.478 Number of Namespaces: 0 00:30:27.478 Compare Command: Not Supported 00:30:27.478 Write Uncorrectable Command: Not Supported 00:30:27.478 Dataset Management Command: Not Supported 00:30:27.479 Write Zeroes Command: Not Supported 00:30:27.479 Set Features Save Field: Not Supported 00:30:27.479 Reservations: Not Supported 00:30:27.479 Timestamp: Not Supported 00:30:27.479 Copy: Not Supported 00:30:27.479 Volatile Write Cache: Not Present 00:30:27.479 Atomic Write Unit (Normal): 1 00:30:27.479 Atomic Write Unit (PFail): 1 00:30:27.479 Atomic Compare & Write Unit: 1 00:30:27.479 Fused Compare & Write: Supported 00:30:27.479 Scatter-Gather List 00:30:27.479 SGL Command Set: Supported 00:30:27.479 SGL Keyed: Supported 00:30:27.479 SGL Bit Bucket Descriptor: Not Supported 00:30:27.479 SGL Metadata Pointer: Not Supported 00:30:27.479 Oversized SGL: Not Supported 00:30:27.479 SGL Metadata Address: Not Supported 00:30:27.479 SGL Offset: Supported 00:30:27.479 Transport SGL Data Block: Not Supported 00:30:27.479 Replay Protected Memory Block: Not Supported 00:30:27.479 00:30:27.479 Firmware Slot Information 00:30:27.479 ========================= 00:30:27.479 Active slot: 0 00:30:27.479 00:30:27.479 00:30:27.479 Error Log 00:30:27.479 ========= 00:30:27.479 00:30:27.479 Active Namespaces 00:30:27.479 ================= 00:30:27.479 Discovery Log Page 00:30:27.479 ================== 00:30:27.479 Generation Counter: 2 00:30:27.479 Number of Records: 2 00:30:27.479 Record Format: 0 00:30:27.479 00:30:27.479 Discovery Log Entry 0 00:30:27.479 ---------------------- 00:30:27.479 Transport Type: 3 (TCP) 00:30:27.479 Address Family: 1 (IPv4) 00:30:27.479 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:27.479 Entry Flags: 00:30:27.479 Duplicate Returned Information: 1 00:30:27.479 Explicit Persistent Connection Support for Discovery: 1 00:30:27.479 Transport Requirements: 00:30:27.479 Secure Channel: Not Required 00:30:27.479 Port ID: 0 (0x0000) 00:30:27.479 Controller ID: 65535 (0xffff) 00:30:27.479 Admin Max SQ Size: 128 00:30:27.479 Transport Service Identifier: 4420 00:30:27.479 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:27.479 Transport Address: 10.0.0.2 00:30:27.479 
Discovery Log Entry 1 00:30:27.479 ---------------------- 00:30:27.479 Transport Type: 3 (TCP) 00:30:27.479 Address Family: 1 (IPv4) 00:30:27.479 Subsystem Type: 2 (NVM Subsystem) 00:30:27.479 Entry Flags: 00:30:27.479 Duplicate Returned Information: 0 00:30:27.479 Explicit Persistent Connection Support for Discovery: 0 00:30:27.479 Transport Requirements: 00:30:27.479 Secure Channel: Not Required 00:30:27.479 Port ID: 0 (0x0000) 00:30:27.479 Controller ID: 65535 (0xffff) 00:30:27.479 Admin Max SQ Size: 128 00:30:27.479 Transport Service Identifier: 4420 00:30:27.479 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:27.479 Transport Address: 10.0.0.2 [2024-07-13 22:13:46.846285] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:30:27.479 [2024-07-13 22:13:46.846330] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.479 [2024-07-13 22:13:46.846353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:27.479 [2024-07-13 22:13:46.846367] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700 00:30:27.479 [2024-07-13 22:13:46.846381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:27.479 [2024-07-13 22:13:46.846393] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700 00:30:27.479 [2024-07-13 22:13:46.846406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:27.479 [2024-07-13 22:13:46.846417] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.479 [2024-07-13 22:13:46.846430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:27.479 [2024-07-13 22:13:46.846456] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.479 [2024-07-13 22:13:46.846471] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.479 [2024-07-13 22:13:46.846486] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.479 [2024-07-13 22:13:46.846506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.479 [2024-07-13 22:13:46.846557] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.479 [2024-07-13 22:13:46.846717] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.479 [2024-07-13 22:13:46.846740] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.479 [2024-07-13 22:13:46.846753] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.479 [2024-07-13 22:13:46.846764] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.479 [2024-07-13 22:13:46.846785] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.479 [2024-07-13 22:13:46.846799] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.479 [2024-07-13 22:13:46.846817] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x615000015700) 00:30:27.479 [2024-07-13 22:13:46.846854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.479 [2024-07-13 22:13:46.846930] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.479 [2024-07-13 22:13:46.847117] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.479 [2024-07-13 22:13:46.847138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.479 [2024-07-13 22:13:46.847149] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.479 [2024-07-13 22:13:46.847160] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.479 [2024-07-13 22:13:46.847182] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:30:27.479 [2024-07-13 22:13:46.847195] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:30:27.479 [2024-07-13 22:13:46.847220] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.479 [2024-07-13 22:13:46.847251] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.479 [2024-07-13 22:13:46.847262] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.479 [2024-07-13 22:13:46.847280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.479 [2024-07-13 22:13:46.847317] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.479 [2024-07-13 22:13:46.847493] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.479 [2024-07-13 22:13:46.847513] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.479 [2024-07-13 22:13:46.847525] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.479 [2024-07-13 22:13:46.847535] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.479 [2024-07-13 22:13:46.847562] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.479 [2024-07-13 22:13:46.847577] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.479 [2024-07-13 22:13:46.847588] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.479 [2024-07-13 22:13:46.847606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.479 [2024-07-13 22:13:46.847651] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.480 [2024-07-13 22:13:46.847824] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.480 [2024-07-13 22:13:46.847856] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.480 [2024-07-13 22:13:46.847878] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.480 [2024-07-13 22:13:46.847891] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.480 [2024-07-13 22:13:46.847917] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.480 [2024-07-13 22:13:46.847933] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.480 [2024-07-13 22:13:46.847943] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.480 [2024-07-13 22:13:46.847961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.480 [2024-07-13 22:13:46.847991] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.480 [2024-07-13 22:13:46.848151] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.480 [2024-07-13 22:13:46.848172] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.480 [2024-07-13 22:13:46.848183] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.480 [2024-07-13 22:13:46.848194] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.480 [2024-07-13 22:13:46.848219] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.480 [2024-07-13 22:13:46.848234] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.480 [2024-07-13 22:13:46.848245] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.480 [2024-07-13 22:13:46.848262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.480 [2024-07-13 22:13:46.848311] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.480 [2024-07-13 22:13:46.848475] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.480 [2024-07-13 22:13:46.848495] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.480 [2024-07-13 22:13:46.848507] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.480 [2024-07-13 22:13:46.848517] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.480 [2024-07-13 22:13:46.848543] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.480 [2024-07-13 22:13:46.848558] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.480 [2024-07-13 22:13:46.848569] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.480 [2024-07-13 22:13:46.848586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.480 [2024-07-13 22:13:46.848615] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.480 [2024-07-13 22:13:46.848776] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.480 [2024-07-13 22:13:46.848805] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.480 [2024-07-13 22:13:46.848818] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.480 [2024-07-13 22:13:46.848829] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.480 [2024-07-13 22:13:46.848855] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.480 [2024-07-13 22:13:46.852887] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.480 [2024-07-13 22:13:46.852903] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700)
00:30:27.480 [2024-07-13 22:13:46.852922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:27.480 [2024-07-13 22:13:46.852954] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:30:27.480 [2024-07-13 22:13:46.853158] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:27.480 [2024-07-13 22:13:46.853178] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:27.480 [2024-07-13 22:13:46.853190] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:27.480 [2024-07-13 22:13:46.853200] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700
00:30:27.480 [2024-07-13 22:13:46.853222] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds
00:30:27.741
00:30:27.741 22:13:46 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:30:27.741 [2024-07-13 22:13:46.956813] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:30:27.741 [2024-07-13 22:13:46.956943] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4179159 ]
00:30:27.741 EAL: No free 2048 kB hugepages reported on node 1
00:30:27.741 [2024-07-13 22:13:47.015139] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:30:27.741 [2024-07-13 22:13:47.015274] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:30:27.741 [2024-07-13 22:13:47.015299] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:30:27.741 [2024-07-13 22:13:47.015333] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:30:27.741 [2024-07-13 22:13:47.015355] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:30:27.741 [2024-07-13 22:13:47.018955] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:30:27.741 [2024-07-13 22:13:47.019035] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000015700 0
00:30:27.741 [2024-07-13 22:13:47.026902] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:30:27.741 [2024-07-13 22:13:47.026933] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:30:27.741 [2024-07-13 22:13:47.026948] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:30:27.741 [2024-07-13 22:13:47.026958] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:30:27.741 [2024-07-13 22:13:47.027028] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:27.741 [2024-07-13 22:13:47.027049] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.741 [2024-07-13 22:13:47.027067] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on
tqpair(0x615000015700) 00:30:27.741 [2024-07-13 22:13:47.027097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:27.741 [2024-07-13 22:13:47.027137] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.741 [2024-07-13 22:13:47.034913] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.741 [2024-07-13 22:13:47.034940] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.741 [2024-07-13 22:13:47.034953] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.741 [2024-07-13 22:13:47.034966] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.741 [2024-07-13 22:13:47.034990] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:27.741 [2024-07-13 22:13:47.035034] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:30:27.741 [2024-07-13 22:13:47.035054] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:30:27.741 [2024-07-13 22:13:47.035086] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.741 [2024-07-13 22:13:47.035101] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.741 [2024-07-13 22:13:47.035113] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.741 [2024-07-13 22:13:47.035133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.741 [2024-07-13 22:13:47.035183] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.741 [2024-07-13 22:13:47.035453] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.741 [2024-07-13 22:13:47.035475] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.741 [2024-07-13 22:13:47.035488] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.741 [2024-07-13 22:13:47.035500] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.741 [2024-07-13 22:13:47.035521] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:30:27.741 [2024-07-13 22:13:47.035546] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:30:27.741 [2024-07-13 22:13:47.035583] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.741 [2024-07-13 22:13:47.035596] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.741 [2024-07-13 22:13:47.035607] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.741 [2024-07-13 22:13:47.035629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.741 [2024-07-13 22:13:47.035666] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.741 [2024-07-13 22:13:47.035882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.741 [2024-07-13 22:13:47.035904] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.741 [2024-07-13 22:13:47.035920] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.741 [2024-07-13 22:13:47.035932] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.742 [2024-07-13 22:13:47.035947] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:30:27.742 [2024-07-13 22:13:47.035970] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:30:27.742 [2024-07-13 22:13:47.035995] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.742 [2024-07-13 22:13:47.036009] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.742 [2024-07-13 22:13:47.036020] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.742 [2024-07-13 22:13:47.036039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.742 [2024-07-13 22:13:47.036071] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.742 [2024-07-13 22:13:47.036223] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.742 [2024-07-13 22:13:47.036243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.742 [2024-07-13 22:13:47.036255] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.742 [2024-07-13 22:13:47.036265] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.742 [2024-07-13 22:13:47.036284] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:27.742 [2024-07-13 22:13:47.036312] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.742 [2024-07-13 22:13:47.036328] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.742 [2024-07-13 22:13:47.036339] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.742 [2024-07-13 22:13:47.036373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.742 [2024-07-13 22:13:47.036410] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.742 [2024-07-13 22:13:47.036610] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.742 [2024-07-13 22:13:47.036631] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.742 [2024-07-13 22:13:47.036642] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.742 [2024-07-13 22:13:47.036657] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.742 [2024-07-13 22:13:47.036672] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:30:27.742 [2024-07-13 22:13:47.036686] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:30:27.742 [2024-07-13 22:13:47.036707] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:27.742 [2024-07-13 22:13:47.036824] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:30:27.742 [2024-07-13 22:13:47.036837] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:27.742 [2024-07-13 22:13:47.036891] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.742 [2024-07-13 22:13:47.036912] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.742 [2024-07-13 22:13:47.036925] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.742 [2024-07-13 22:13:47.036944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.742 [2024-07-13 22:13:47.036977] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.742 [2024-07-13 22:13:47.037148] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.742 [2024-07-13 22:13:47.037182] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.742 [2024-07-13 22:13:47.037194] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.742 [2024-07-13 22:13:47.037205] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.742 [2024-07-13 22:13:47.037220] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:27.742 [2024-07-13 22:13:47.037252] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.742 [2024-07-13 22:13:47.037282] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.742 [2024-07-13 22:13:47.037293] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.742 [2024-07-13 22:13:47.037311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.742 [2024-07-13 22:13:47.037342] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.742 [2024-07-13 22:13:47.037541] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.742 [2024-07-13 22:13:47.037561] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.742 [2024-07-13 22:13:47.037572] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.742 [2024-07-13 22:13:47.037583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.742 [2024-07-13 22:13:47.037596] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:27.742 [2024-07-13 22:13:47.037625] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:30:27.742 [2024-07-13 22:13:47.037649] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:30:27.742 [2024-07-13 22:13:47.037688] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:30:27.742 [2024-07-13 22:13:47.037718] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.742 [2024-07-13 22:13:47.037732] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.742 [2024-07-13 22:13:47.037751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.742 [2024-07-13 22:13:47.037782] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.742 [2024-07-13 22:13:47.038051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:27.742 [2024-07-13 22:13:47.038073] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:27.742 [2024-07-13 22:13:47.038085] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:27.742 [2024-07-13 22:13:47.038097] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=0 00:30:27.742 [2024-07-13 22:13:47.038110] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:27.742 [2024-07-13 22:13:47.038122] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.742 [2024-07-13 22:13:47.038146] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:27.742 [2024-07-13 22:13:47.038164] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:27.742 [2024-07-13 22:13:47.038184] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.742 [2024-07-13 22:13:47.038201] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.742 [2024-07-13 22:13:47.038212] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.742 [2024-07-13 22:13:47.038222] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.742 [2024-07-13 22:13:47.038250] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:30:27.742 [2024-07-13 22:13:47.038266] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:30:27.742 [2024-07-13 22:13:47.038284] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:30:27.742 [2024-07-13 22:13:47.038296] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:30:27.742 [2024-07-13 22:13:47.038312] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:30:27.742 [2024-07-13 22:13:47.038326] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:30:27.742 [2024-07-13 22:13:47.038370] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:30:27.742 [2024-07-13 22:13:47.038394] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.742 [2024-07-13 22:13:47.038408] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.742 [2024-07-13 22:13:47.038419] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=0 on tqpair(0x615000015700) 00:30:27.742 [2024-07-13 22:13:47.038438] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:27.742 [2024-07-13 22:13:47.038474] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.742 [2024-07-13 22:13:47.038684] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.742 [2024-07-13 22:13:47.038706] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.742 [2024-07-13 22:13:47.038718] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.038728] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700 00:30:27.743 [2024-07-13 22:13:47.038747] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.038760] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.038772] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000015700) 00:30:27.743 [2024-07-13 22:13:47.038810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:27.743 [2024-07-13 22:13:47.038833] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.038844] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.042894] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000015700) 00:30:27.743 [2024-07-13 22:13:47.042923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:27.743 [2024-07-13 22:13:47.042941] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.042952] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.042962] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000015700) 00:30:27.743 [2024-07-13 22:13:47.042978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:27.743 [2024-07-13 22:13:47.042993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.043008] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.043019] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.743 [2024-07-13 22:13:47.043034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:27.743 [2024-07-13 22:13:47.043048] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:27.743 [2024-07-13 22:13:47.043091] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:27.743 [2024-07-13 22:13:47.043117] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.043131] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x615000015700) 00:30:27.743 [2024-07-13 22:13:47.043150] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.743 [2024-07-13 22:13:47.043204] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:30:27.743 [2024-07-13 22:13:47.043222] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:30:27.743 [2024-07-13 22:13:47.043234] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:30:27.743 [2024-07-13 22:13:47.043261] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.743 [2024-07-13 22:13:47.043273] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:27.743 [2024-07-13 22:13:47.043535] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.743 [2024-07-13 22:13:47.043572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.743 [2024-07-13 22:13:47.043583] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.043594] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:27.743 [2024-07-13 22:13:47.043609] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:30:27.743 [2024-07-13 22:13:47.043624] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:27.743 [2024-07-13 22:13:47.043645] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:30:27.743 [2024-07-13 22:13:47.043667] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:27.743 [2024-07-13 22:13:47.043685] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.043701] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.043713] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:27.743 [2024-07-13 22:13:47.043732] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:27.743 [2024-07-13 22:13:47.043763] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:27.743 [2024-07-13 22:13:47.043999] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.743 [2024-07-13 22:13:47.044022] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.743 [2024-07-13 22:13:47.044034] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.044045] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:27.743 [2024-07-13 22:13:47.044146] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:30:27.743 [2024-07-13 22:13:47.044197] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for identify active ns (timeout 30000 ms) 00:30:27.743 [2024-07-13 22:13:47.044232] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.044247] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:27.743 [2024-07-13 22:13:47.044266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.743 [2024-07-13 22:13:47.044297] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:27.743 [2024-07-13 22:13:47.044533] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:27.743 [2024-07-13 22:13:47.044554] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:27.743 [2024-07-13 22:13:47.044566] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.044576] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:27.743 [2024-07-13 22:13:47.044588] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:27.743 [2024-07-13 22:13:47.044599] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.044632] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.044648] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.044754] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.743 [2024-07-13 22:13:47.044779] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.743 [2024-07-13 22:13:47.044793] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.044803] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:27.743 [2024-07-13 22:13:47.044843] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:30:27.743 [2024-07-13 22:13:47.044883] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:30:27.743 [2024-07-13 22:13:47.044921] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:30:27.743 [2024-07-13 22:13:47.044948] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.044962] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:27.743 [2024-07-13 22:13:47.044982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.743 [2024-07-13 22:13:47.045019] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:27.743 [2024-07-13 22:13:47.045214] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:27.743 [2024-07-13 22:13:47.045236] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:27.743 [2024-07-13 22:13:47.045248] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.045258] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:27.743 [2024-07-13 22:13:47.045270] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:27.743 [2024-07-13 22:13:47.045281] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.045314] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.045330] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.045390] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.743 [2024-07-13 22:13:47.045409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.743 [2024-07-13 22:13:47.045420] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.045435] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:27.743 [2024-07-13 22:13:47.045473] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:27.743 [2024-07-13 22:13:47.045511] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:27.743 [2024-07-13 22:13:47.045540] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.743 [2024-07-13 22:13:47.045555] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:27.743 [2024-07-13 22:13:47.045589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.743 [2024-07-13 22:13:47.045621] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:27.743 [2024-07-13 22:13:47.045927] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:27.743 [2024-07-13 22:13:47.045950] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:27.743 [2024-07-13 22:13:47.045961] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:27.744 [2024-07-13 22:13:47.045972] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=4 00:30:27.744 [2024-07-13 22:13:47.045983] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:27.744 [2024-07-13 22:13:47.045994] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.744 [2024-07-13 22:13:47.046011] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:27.744 [2024-07-13 22:13:47.046035] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:27.744 [2024-07-13 22:13:47.046058] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.744 [2024-07-13 22:13:47.046075] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.744 [2024-07-13 22:13:47.046086] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.744 [2024-07-13 22:13:47.046097] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:27.744 [2024-07-13 
22:13:47.046123] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:27.744 [2024-07-13 22:13:47.046149] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:30:27.744 [2024-07-13 22:13:47.046178] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:30:27.744 [2024-07-13 22:13:47.046209] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:27.744 [2024-07-13 22:13:47.046224] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:27.744 [2024-07-13 22:13:47.046238] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:30:27.744 [2024-07-13 22:13:47.046256] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:30:27.744 [2024-07-13 22:13:47.046268] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:30:27.744 [2024-07-13 22:13:47.046293] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:30:27.744 [2024-07-13 22:13:47.046342] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.744 [2024-07-13 22:13:47.046359] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:27.744 [2024-07-13 22:13:47.046382] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.744 [2024-07-13 22:13:47.046410] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.744 [2024-07-13 22:13:47.046424] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.744 [2024-07-13 22:13:47.046435] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:27.744 [2024-07-13 22:13:47.046452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:27.744 [2024-07-13 22:13:47.046483] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:27.744 [2024-07-13 22:13:47.046522] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:27.744 [2024-07-13 22:13:47.046721] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.744 [2024-07-13 22:13:47.046746] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.744 [2024-07-13 22:13:47.046759] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.744 [2024-07-13 22:13:47.046771] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x615000015700 00:30:27.744 [2024-07-13 22:13:47.046793] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.744 [2024-07-13 22:13:47.046824] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.744 [2024-07-13 22:13:47.046835] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.744 [2024-07-13 22:13:47.046845] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:27.744 [2024-07-13 22:13:47.050895] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.744 [2024-07-13 22:13:47.050916] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:27.744 [2024-07-13 22:13:47.050940] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.744 [2024-07-13 22:13:47.050973] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:27.744 [2024-07-13 22:13:47.051171] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.744 [2024-07-13 22:13:47.051196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.744 [2024-07-13 22:13:47.051209] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.744 [2024-07-13 22:13:47.051222] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:27.744 [2024-07-13 22:13:47.051249] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.744 [2024-07-13 22:13:47.051264] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:27.744 [2024-07-13 22:13:47.051298] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.744 [2024-07-13 22:13:47.051328] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:27.744 [2024-07-13 22:13:47.051537] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.744 [2024-07-13 22:13:47.051559] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.744 [2024-07-13 22:13:47.051570] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.744 [2024-07-13 22:13:47.051581] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:27.744 [2024-07-13 22:13:47.051611] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.744 [2024-07-13 22:13:47.051627] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:27.744 [2024-07-13 22:13:47.051644] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.744 [2024-07-13 22:13:47.051688] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:27.744 [2024-07-13 22:13:47.051916] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.744 [2024-07-13 22:13:47.051939] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.744 [2024-07-13 22:13:47.051951] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.744 [2024-07-13 22:13:47.051961] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:27.744 [2024-07-13 22:13:47.052008] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.744 [2024-07-13 22:13:47.052027] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000015700) 00:30:27.744 [2024-07-13 22:13:47.052046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.744 [2024-07-13 22:13:47.052068] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.744 [2024-07-13 22:13:47.052082] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000015700) 00:30:27.744 [2024-07-13 22:13:47.052099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.744 [2024-07-13 22:13:47.052120] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.744 [2024-07-13 22:13:47.052139] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000015700) 00:30:27.744 [2024-07-13 22:13:47.052173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.744 [2024-07-13 22:13:47.052197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.744 [2024-07-13 22:13:47.052215] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700) 00:30:27.744 [2024-07-13 22:13:47.052233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.744 [2024-07-13 22:13:47.052280] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:30:27.744 [2024-07-13 22:13:47.052298] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:30:27.744 [2024-07-13 22:13:47.052323] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:30:27.744 [2024-07-13 22:13:47.052337] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:30:27.744 [2024-07-13 22:13:47.052679] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:27.744 [2024-07-13 22:13:47.052702] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:27.744 [2024-07-13 22:13:47.052713] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:27.745 [2024-07-13 22:13:47.052724] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=8192, cccid=5 00:30:27.745 [2024-07-13 22:13:47.052751] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x615000015700): expected_datao=0, payload_size=8192 00:30:27.745 [2024-07-13 22:13:47.052780] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.745 [2024-07-13 22:13:47.052841] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:27.745 [2024-07-13 22:13:47.052859] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:27.745 [2024-07-13 22:13:47.052890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:27.745 [2024-07-13 22:13:47.052908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:27.745 [2024-07-13 22:13:47.052920] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
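The trace above is the admin-queue bring-up that spdk_nvme_identify drives over NVMe/TCP: TCP socket connect, ICReq/ICResp exchange, FABRIC CONNECT, property reads of VS/CAP/CC/CSTS, enabling the controller via CC.EN, then the Identify/Set Features/Get Log Page fan-out. A minimal C sketch of the same flow against SPDK's public host API follows; the app name is a placeholder and error handling is trimmed, so treat it as an illustration, not the binary under test (that is built from SPDK's examples/nvme/identify).

/*
 * Minimal sketch of what spdk_nvme_identify does at a high level for the
 * '-r trtype:tcp ...' argument seen above: parse the transport ID, connect
 * (which drives the FABRIC CONNECT / PROPERTY GET/SET sequence traced in
 * this log), then read the cached Identify Controller data.
 */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {};
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    env_opts.opts_size = sizeof(env_opts);
    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch"; /* hypothetical app name */
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Same transport ID string the test passes via -r. */
    spdk_nvme_transport_id_parse(&trid,
        "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
        "subnqn:nqn.2016-06.io.spdk:cnode1");

    /* Synchronous connect: runs the whole admin-queue init state machine. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    /* SN/MN are fixed-width, space-padded fields, not NUL-terminated. */
    printf("Serial Number: %-.20s\n", cdata->sn);
    printf("Model Number: %-.40s\n", cdata->mn);

    spdk_nvme_detach(ctrlr); /* CC shutdown handshake, as traced at the end of this log */
    return 0;
}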
00:30:27.745 [2024-07-13 22:13:47.052930] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=4 00:30:27.745 [2024-07-13 22:13:47.052942] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:27.745 [2024-07-13 22:13:47.052957] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.745 [2024-07-13 22:13:47.052980] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:27.745 [2024-07-13 22:13:47.052993] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:27.745 [2024-07-13 22:13:47.053007] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:27.745 [2024-07-13 22:13:47.053021] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:27.745 [2024-07-13 22:13:47.053032] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:27.745 [2024-07-13 22:13:47.053045] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=512, cccid=6 00:30:27.745 [2024-07-13 22:13:47.053058] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x615000015700): expected_datao=0, payload_size=512 00:30:27.745 [2024-07-13 22:13:47.053069] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.745 [2024-07-13 22:13:47.053088] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:27.745 [2024-07-13 22:13:47.053100] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:27.745 [2024-07-13 22:13:47.053114] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:27.745 [2024-07-13 22:13:47.053128] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:27.745 [2024-07-13 22:13:47.053138] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:27.745 [2024-07-13 22:13:47.053165] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000015700): datao=0, datal=4096, cccid=7 00:30:27.745 [2024-07-13 22:13:47.053176] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x615000015700): expected_datao=0, payload_size=4096 00:30:27.745 [2024-07-13 22:13:47.053186] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.745 [2024-07-13 22:13:47.053206] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:27.745 [2024-07-13 22:13:47.053218] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:27.745 [2024-07-13 22:13:47.053250] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.745 [2024-07-13 22:13:47.053266] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.745 [2024-07-13 22:13:47.053276] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.745 [2024-07-13 22:13:47.053287] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x615000015700 00:30:27.745 [2024-07-13 22:13:47.053320] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.745 [2024-07-13 22:13:47.053337] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.745 [2024-07-13 22:13:47.053347] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.745 [2024-07-13 22:13:47.053357] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on 
tqpair=0x615000015700
00:30:27.745 [2024-07-13 22:13:47.053381] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:27.745 [2024-07-13 22:13:47.053399] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:27.745 [2024-07-13 22:13:47.053410] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:27.745 [2024-07-13 22:13:47.053420] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x615000015700
00:30:27.745 [2024-07-13 22:13:47.053437] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:27.745 [2024-07-13 22:13:47.053452] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:27.745 [2024-07-13 22:13:47.053462] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:27.745 [2024-07-13 22:13:47.053472] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700
00:30:27.745 =====================================================
00:30:27.745 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:27.745 =====================================================
00:30:27.745 Controller Capabilities/Features
00:30:27.745 ================================
00:30:27.745 Vendor ID: 8086
00:30:27.745 Subsystem Vendor ID: 8086
00:30:27.745 Serial Number: SPDK00000000000001
00:30:27.745 Model Number: SPDK bdev Controller
00:30:27.745 Firmware Version: 24.09
00:30:27.745 Recommended Arb Burst: 6
00:30:27.745 IEEE OUI Identifier: e4 d2 5c
00:30:27.745 Multi-path I/O
00:30:27.745 May have multiple subsystem ports: Yes
00:30:27.745 May have multiple controllers: Yes
00:30:27.745 Associated with SR-IOV VF: No
00:30:27.745 Max Data Transfer Size: 131072
00:30:27.745 Max Number of Namespaces: 32
00:30:27.745 Max Number of I/O Queues: 127
00:30:27.745 NVMe Specification Version (VS): 1.3
00:30:27.745 NVMe Specification Version (Identify): 1.3
00:30:27.745 Maximum Queue Entries: 128
00:30:27.745 Contiguous Queues Required: Yes
00:30:27.745 Arbitration Mechanisms Supported
00:30:27.745 Weighted Round Robin: Not Supported
00:30:27.745 Vendor Specific: Not Supported
00:30:27.745 Reset Timeout: 15000 ms
00:30:27.745 Doorbell Stride: 4 bytes
00:30:27.745 NVM Subsystem Reset: Not Supported
00:30:27.745 Command Sets Supported
00:30:27.745 NVM Command Set: Supported
00:30:27.745 Boot Partition: Not Supported
00:30:27.745 Memory Page Size Minimum: 4096 bytes
00:30:27.745 Memory Page Size Maximum: 4096 bytes
00:30:27.745 Persistent Memory Region: Not Supported
00:30:27.745 Optional Asynchronous Events Supported
00:30:27.745 Namespace Attribute Notices: Supported
00:30:27.745 Firmware Activation Notices: Not Supported
00:30:27.745 ANA Change Notices: Not Supported
00:30:27.745 PLE Aggregate Log Change Notices: Not Supported
00:30:27.745 LBA Status Info Alert Notices: Not Supported
00:30:27.745 EGE Aggregate Log Change Notices: Not Supported
00:30:27.745 Normal NVM Subsystem Shutdown event: Not Supported
00:30:27.745 Zone Descriptor Change Notices: Not Supported
00:30:27.745 Discovery Log Change Notices: Not Supported
00:30:27.745 Controller Attributes
00:30:27.745 128-bit Host Identifier: Supported
00:30:27.745 Non-Operational Permissive Mode: Not Supported
00:30:27.745 NVM Sets: Not Supported
00:30:27.745 Read Recovery Levels: Not Supported
00:30:27.745 Endurance Groups: Not Supported
00:30:27.745 Predictable Latency Mode: Not Supported
00:30:27.745 Traffic Based Keep ALive: Not Supported
00:30:27.745 Namespace Granularity: Not Supported
00:30:27.745 SQ Associations: Not Supported
00:30:27.745 UUID List: Not Supported
00:30:27.745 Multi-Domain Subsystem: Not Supported
00:30:27.745 Fixed Capacity Management: Not Supported
00:30:27.745 Variable Capacity Management: Not Supported
00:30:27.745 Delete Endurance Group: Not Supported
00:30:27.745 Delete NVM Set: Not Supported
00:30:27.745 Extended LBA Formats Supported: Not Supported
00:30:27.745 Flexible Data Placement Supported: Not Supported
00:30:27.745
00:30:27.745 Controller Memory Buffer Support
00:30:27.745 ================================
00:30:27.745 Supported: No
00:30:27.745
00:30:27.745 Persistent Memory Region Support
00:30:27.745 ================================
00:30:27.746 Supported: No
00:30:27.746
00:30:27.746 Admin Command Set Attributes
00:30:27.746 ============================
00:30:27.746 Security Send/Receive: Not Supported
00:30:27.746 Format NVM: Not Supported
00:30:27.746 Firmware Activate/Download: Not Supported
00:30:27.746 Namespace Management: Not Supported
00:30:27.746 Device Self-Test: Not Supported
00:30:27.746 Directives: Not Supported
00:30:27.746 NVMe-MI: Not Supported
00:30:27.746 Virtualization Management: Not Supported
00:30:27.746 Doorbell Buffer Config: Not Supported
00:30:27.746 Get LBA Status Capability: Not Supported
00:30:27.746 Command & Feature Lockdown Capability: Not Supported
00:30:27.746 Abort Command Limit: 4
00:30:27.746 Async Event Request Limit: 4
00:30:27.746 Number of Firmware Slots: N/A
00:30:27.746 Firmware Slot 1 Read-Only: N/A
00:30:27.746 Firmware Activation Without Reset: N/A
00:30:27.746 Multiple Update Detection Support: N/A
00:30:27.746 Firmware Update Granularity: No Information Provided
00:30:27.746 Per-Namespace SMART Log: No
00:30:27.746 Asymmetric Namespace Access Log Page: Not Supported
00:30:27.746 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:30:27.746 Command Effects Log Page: Supported
00:30:27.746 Get Log Page Extended Data: Supported
00:30:27.746 Telemetry Log Pages: Not Supported
00:30:27.746 Persistent Event Log Pages: Not Supported
00:30:27.746 Supported Log Pages Log Page: May Support
00:30:27.746 Commands Supported & Effects Log Page: Not Supported
00:30:27.746 Feature Identifiers & Effects Log Page:May Support
00:30:27.746 NVMe-MI Commands & Effects Log Page: May Support
00:30:27.746 Data Area 4 for Telemetry Log: Not Supported
00:30:27.746 Error Log Page Entries Supported: 128
00:30:27.746 Keep Alive: Supported
00:30:27.746 Keep Alive Granularity: 10000 ms
00:30:27.746
00:30:27.746 NVM Command Set Attributes
00:30:27.746 ==========================
00:30:27.746 Submission Queue Entry Size
00:30:27.746 Max: 64
00:30:27.746 Min: 64
00:30:27.746 Completion Queue Entry Size
00:30:27.746 Max: 16
00:30:27.746 Min: 16
00:30:27.746 Number of Namespaces: 32
00:30:27.746 Compare Command: Supported
00:30:27.746 Write Uncorrectable Command: Not Supported
00:30:27.746 Dataset Management Command: Supported
00:30:27.746 Write Zeroes Command: Supported
00:30:27.746 Set Features Save Field: Not Supported
00:30:27.746 Reservations: Supported
00:30:27.746 Timestamp: Not Supported
00:30:27.746 Copy: Supported
00:30:27.746 Volatile Write Cache: Present
00:30:27.746 Atomic Write Unit (Normal): 1
00:30:27.746 Atomic Write Unit (PFail): 1
00:30:27.746 Atomic Compare & Write Unit: 1
00:30:27.746 Fused Compare & Write: Supported
00:30:27.746 Scatter-Gather List
00:30:27.746 SGL Command Set: Supported
00:30:27.746 SGL Keyed: Supported
00:30:27.746 SGL Bit Bucket Descriptor: Not Supported
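Every field in this report is decoded from the Identify Controller payload fetched earlier in the trace (the IDENTIFY cdw10:00000001 command). A hedged sketch of how a few of the lines above map onto struct spdk_nvme_ctrlr_data, reusing the connected ctrlr from the previous sketch:

/*
 * Sketch: where some report fields come from. Requires the same headers as
 * the previous sketch; units follow the NVMe spec (KAS is in 100 ms steps,
 * ELPE is zero-based).
 */
#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static void print_ctrlr_caps(struct spdk_nvme_ctrlr *ctrlr)
{
    const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

    /* "Keep Alive Granularity: 10000 ms" <- KAS, 100 ms units */
    printf("Keep Alive Granularity: %u ms\n", cdata->kas * 100u);

    /* "Fused Compare & Write: Supported" <- FUSES bit 0 */
    printf("Fused Compare & Write: %s\n",
           cdata->fuses.compare_and_write ? "Supported" : "Not Supported");

    /* "Max Data Transfer Size: 131072" <- MDTS, resolved by the library */
    printf("Max Data Transfer Size: %u\n",
           spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

    /* "Error Log Page Entries Supported: 128" <- ELPE is zero-based */
    printf("Error Log Page Entries Supported: %u\n", cdata->elpe + 1u);
}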
00:30:27.746 SGL Metadata Pointer: Not Supported
00:30:27.746 Oversized SGL: Not Supported
00:30:27.746 SGL Metadata Address: Not Supported
00:30:27.746 SGL Offset: Supported
00:30:27.746 Transport SGL Data Block: Not Supported
00:30:27.746 Replay Protected Memory Block: Not Supported
00:30:27.746
00:30:27.746 Firmware Slot Information
00:30:27.746 =========================
00:30:27.746 Active slot: 1
00:30:27.746 Slot 1 Firmware Revision: 24.09
00:30:27.746
00:30:27.746
00:30:27.746 Commands Supported and Effects
00:30:27.746 ==============================
00:30:27.746 Admin Commands
00:30:27.746 --------------
00:30:27.746 Get Log Page (02h): Supported
00:30:27.746 Identify (06h): Supported
00:30:27.746 Abort (08h): Supported
00:30:27.746 Set Features (09h): Supported
00:30:27.746 Get Features (0Ah): Supported
00:30:27.746 Asynchronous Event Request (0Ch): Supported
00:30:27.746 Keep Alive (18h): Supported
00:30:27.746 I/O Commands
00:30:27.746 ------------
00:30:27.746 Flush (00h): Supported LBA-Change
00:30:27.746 Write (01h): Supported LBA-Change
00:30:27.746 Read (02h): Supported
00:30:27.746 Compare (05h): Supported
00:30:27.746 Write Zeroes (08h): Supported LBA-Change
00:30:27.746 Dataset Management (09h): Supported LBA-Change
00:30:27.746 Copy (19h): Supported LBA-Change
00:30:27.746
00:30:27.746 Error Log
00:30:27.746 =========
00:30:27.746
00:30:27.746 Arbitration
00:30:27.746 ===========
00:30:27.746 Arbitration Burst: 1
00:30:27.746
00:30:27.746 Power Management
00:30:27.746 ================
00:30:27.746 Number of Power States: 1
00:30:27.746 Current Power State: Power State #0
00:30:27.746 Power State #0:
00:30:27.746 Max Power: 0.00 W
00:30:27.746 Non-Operational State: Operational
00:30:27.746 Entry Latency: Not Reported
00:30:27.746 Exit Latency: Not Reported
00:30:27.746 Relative Read Throughput: 0
00:30:27.746 Relative Read Latency: 0
00:30:27.746 Relative Write Throughput: 0
00:30:27.746 Relative Write Latency: 0
00:30:27.746 Idle Power: Not Reported
00:30:27.746 Active Power: Not Reported
00:30:27.746 Non-Operational Permissive Mode: Not Supported
00:30:27.746
00:30:27.746 Health Information
00:30:27.746 ==================
00:30:27.746 Critical Warnings:
00:30:27.746 Available Spare Space: OK
00:30:27.746 Temperature: OK
00:30:27.746 Device Reliability: OK
00:30:27.746 Read Only: No
00:30:27.746 Volatile Memory Backup: OK
00:30:27.746 Current Temperature: 0 Kelvin (-273 Celsius)
00:30:27.746 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:30:27.746 Available Spare: 0%
00:30:27.746 Available Spare Threshold: 0%
00:30:27.746 Life Percentage Used:[2024-07-13 22:13:47.053689] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.746 [2024-07-13 22:13:47.053708] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000015700)
00:30:27.746 [2024-07-13 22:13:47.053727] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:27.746 [2024-07-13 22:13:47.053766] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0
00:30:27.746 [2024-07-13 22:13:47.053999] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:27.746 [2024-07-13 22:13:47.054022] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:27.747 [2024-07-13 22:13:47.054035] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:27.747 [2024-07-13 22:13:47.054053]
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x615000015700
00:30:27.747 [2024-07-13 22:13:47.054136] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:30:27.747 [2024-07-13 22:13:47.054182] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000015700
00:30:27.747 [2024-07-13 22:13:47.054203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:27.747 [2024-07-13 22:13:47.054217] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x615000015700
00:30:27.747 [2024-07-13 22:13:47.054235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:27.747 [2024-07-13 22:13:47.054249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x615000015700
00:30:27.747 [2024-07-13 22:13:47.054262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:27.747 [2024-07-13 22:13:47.054274] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700
00:30:27.747 [2024-07-13 22:13:47.054287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:27.747 [2024-07-13 22:13:47.054306] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:27.747 [2024-07-13 22:13:47.054320] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.747 [2024-07-13 22:13:47.054331] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700)
00:30:27.747 [2024-07-13 22:13:47.054349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:27.747 [2024-07-13 22:13:47.054382] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:30:27.747 [2024-07-13 22:13:47.054590] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:27.747 [2024-07-13 22:13:47.054617] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:30:27.747 [2024-07-13 22:13:47.054630] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:30:27.747 [2024-07-13 22:13:47.054642] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700
00:30:27.747 [2024-07-13 22:13:47.054662] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:30:27.747 [2024-07-13 22:13:47.054676] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:30:27.747 [2024-07-13 22:13:47.054687] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700)
00:30:27.747 [2024-07-13 22:13:47.054721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:27.747 [2024-07-13 22:13:47.054760] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0
00:30:27.747 [2024-07-13 22:13:47.058907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:30:27.747 [2024-07-13 22:13:47.058931] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
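The teardown traced above (outstanding admin commands completed as ABORTED - SQ DELETION, CC written for shutdown, CSTS polled until "shutdown complete") is what a detach drives. A sketch of the non-blocking variant of the same teardown, again assuming the ctrlr from the earlier sketch:

/*
 * Sketch: non-blocking detach. spdk_nvme_detach_poll_async() returns
 * -EAGAIN until the shutdown handshake finishes; the log above reports
 * "shutdown complete in 0 milliseconds" for this controller.
 */
#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static void detach_nonblocking(struct spdk_nvme_ctrlr *ctrlr)
{
    struct spdk_nvme_detach_ctx *dctx = NULL;

    if (spdk_nvme_detach_async(ctrlr, &dctx) == 0 && dctx != NULL) {
        while (spdk_nvme_detach_poll_async(dctx) == -EAGAIN) {
            /* other work could run here while the controller shuts down */
        }
    }
}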
00:30:27.747 [2024-07-13 22:13:47.058943] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.747 [2024-07-13 22:13:47.058954] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.747 [2024-07-13 22:13:47.058968] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:30:27.747 [2024-07-13 22:13:47.058988] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:30:27.747 [2024-07-13 22:13:47.059032] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:27.747 [2024-07-13 22:13:47.059049] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:27.747 [2024-07-13 22:13:47.059073] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000015700) 00:30:27.747 [2024-07-13 22:13:47.059100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.747 [2024-07-13 22:13:47.059135] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:30:27.747 [2024-07-13 22:13:47.059322] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:27.747 [2024-07-13 22:13:47.059348] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:27.747 [2024-07-13 22:13:47.059361] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:27.747 [2024-07-13 22:13:47.059372] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x615000015700 00:30:27.747 [2024-07-13 22:13:47.059395] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:30:27.747 0% 00:30:27.747 Data Units Read: 0 00:30:27.747 Data Units Written: 0 00:30:27.747 Host Read Commands: 0 00:30:27.747 Host Write Commands: 0 00:30:27.747 Controller Busy Time: 0 minutes 00:30:27.747 Power Cycles: 0 00:30:27.747 Power On Hours: 0 hours 00:30:27.747 Unsafe Shutdowns: 0 00:30:27.747 Unrecoverable Media Errors: 0 00:30:27.747 Lifetime Error Log Entries: 0 00:30:27.747 Warning Temperature Time: 0 minutes 00:30:27.747 Critical Temperature Time: 0 minutes 00:30:27.747 00:30:27.747 Number of Queues 00:30:27.747 ================ 00:30:27.747 Number of I/O Submission Queues: 127 00:30:27.747 Number of I/O Completion Queues: 127 00:30:27.747 00:30:27.747 Active Namespaces 00:30:27.747 ================= 00:30:27.747 Namespace ID:1 00:30:27.747 Error Recovery Timeout: Unlimited 00:30:27.747 Command Set Identifier: NVM (00h) 00:30:27.747 Deallocate: Supported 00:30:27.747 Deallocated/Unwritten Error: Not Supported 00:30:27.747 Deallocated Read Value: Unknown 00:30:27.747 Deallocate in Write Zeroes: Not Supported 00:30:27.747 Deallocated Guard Field: 0xFFFF 00:30:27.747 Flush: Supported 00:30:27.747 Reservation: Supported 00:30:27.747 Namespace Sharing Capabilities: Multiple Controllers 00:30:27.747 Size (in LBAs): 131072 (0GiB) 00:30:27.747 Capacity (in LBAs): 131072 (0GiB) 00:30:27.747 Utilization (in LBAs): 131072 (0GiB) 00:30:27.748 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:27.748 EUI64: ABCDEF0123456789 00:30:27.748 UUID: b901a19b-bcd7-4b17-825c-4378e94822d8 00:30:27.748 Thin Provisioning: Not Supported 00:30:27.748 Per-NS Atomic Units: Yes 00:30:27.748 Atomic Boundary Size (Normal): 0 00:30:27.748 Atomic Boundary Size (PFail): 0 00:30:27.748 
Atomic Boundary Offset: 0 00:30:27.748 Maximum Single Source Range Length: 65535 00:30:27.748 Maximum Copy Length: 65535 00:30:27.748 Maximum Source Range Count: 1 00:30:27.748 NGUID/EUI64 Never Reused: No 00:30:27.748 Namespace Write Protected: No 00:30:27.748 Number of LBA Formats: 1 00:30:27.748 Current LBA Format: LBA Format #00 00:30:27.748 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:27.748 00:30:27.748 22:13:47 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:27.748 22:13:47 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:27.748 22:13:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.748 22:13:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:27.748 22:13:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.748 22:13:47 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:27.748 22:13:47 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:27.748 22:13:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:27.748 22:13:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:30:28.006 22:13:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:28.006 22:13:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:30:28.006 22:13:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:28.006 22:13:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:28.006 rmmod nvme_tcp 00:30:28.006 rmmod nvme_fabrics 00:30:28.006 rmmod nvme_keyring 00:30:28.006 22:13:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:28.006 22:13:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:30:28.006 22:13:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:30:28.006 22:13:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 4178997 ']' 00:30:28.006 22:13:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 4178997 00:30:28.006 22:13:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 4178997 ']' 00:30:28.006 22:13:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 4178997 00:30:28.006 22:13:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:30:28.006 22:13:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:28.006 22:13:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4178997 00:30:28.006 22:13:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:28.006 22:13:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:28.006 22:13:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4178997' 00:30:28.006 killing process with pid 4178997 00:30:28.006 22:13:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 4178997 00:30:28.006 22:13:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 4178997 00:30:29.381 22:13:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:29.381 22:13:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:29.381 22:13:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:29.381 22:13:48 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:29.381 22:13:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:29.381 22:13:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.381 22:13:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:29.381 22:13:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.284 22:13:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:31.284 00:30:31.284 real 0m7.459s 00:30:31.284 user 0m10.323s 00:30:31.284 sys 0m2.077s 00:30:31.284 22:13:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:31.284 22:13:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:31.284 ************************************ 00:30:31.284 END TEST nvmf_identify 00:30:31.284 ************************************ 00:30:31.575 22:13:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:31.575 22:13:50 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:31.575 22:13:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:31.575 22:13:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:31.575 22:13:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:31.575 ************************************ 00:30:31.575 START TEST nvmf_perf 00:30:31.575 ************************************ 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:31.575 * Looking for test storage... 
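Every test in this log runs through the run_test helper, which prints the START/END banners and the real/user/sys timing seen above for nvmf_identify before the next test begins. A hedged re-creation of that banner-plus-timing pattern (a sketch only, not SPDK's actual implementation, which also routes output through its xtrace helpers):

    # Sketch of the wrapper pattern visible in the log: banner, timed run, banner.
    run_test_sketch() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                 # produces the real/user/sys lines
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }
    run_test_sketch nvmf_perf ./test/nvmf/host/perf.sh --transport=tcp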
00:30:31.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.575 22:13:50 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:30:31.575 22:13:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:33.478 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:33.478 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:33.478 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:33.478 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:33.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:33.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:30:33.478 00:30:33.478 --- 10.0.0.2 ping statistics --- 00:30:33.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.478 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:33.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
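The nvmf_tcp_init sequence above builds the test topology from the two E810 ports discovered earlier under 0000:0a:00.0/0000:0a:00.1: cvl_0_0 moves into the cvl_0_0_ns_spdk network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and TCP port 4420 is opened in iptables, so one machine can exercise a real wire between its own two ports. Condensed into a standalone sketch (interface names are the ones this rig derived from the ice driver; substitute your own):

    # Two-port loopback rig: target NIC isolated in a netns, initiator NIC in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target sanity check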
00:30:33.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:30:33.478 00:30:33.478 --- 10.0.0.1 ping statistics --- 00:30:33.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.478 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:33.478 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:33.479 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:33.479 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:33.479 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:33.479 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:33.479 22:13:52 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:33.479 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:33.479 22:13:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:33.479 22:13:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:33.479 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=4181227 00:30:33.479 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:33.479 22:13:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 4181227 00:30:33.479 22:13:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 4181227 ']' 00:30:33.479 22:13:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:33.479 22:13:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:33.479 22:13:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:33.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:33.479 22:13:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:33.479 22:13:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:33.737 [2024-07-13 22:13:52.919184] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:33.737 [2024-07-13 22:13:52.919340] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:33.737 EAL: No free 2048 kB hugepages reported on node 1 00:30:33.737 [2024-07-13 22:13:53.052506] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:34.005 [2024-07-13 22:13:53.282538] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:34.005 [2024-07-13 22:13:53.282608] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:34.005 [2024-07-13 22:13:53.282631] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:34.005 [2024-07-13 22:13:53.282649] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:34.005 [2024-07-13 22:13:53.282677] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:34.005 [2024-07-13 22:13:53.282804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:34.005 [2024-07-13 22:13:53.282876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:34.005 [2024-07-13 22:13:53.282912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.005 [2024-07-13 22:13:53.282935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:34.577 22:13:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:34.577 22:13:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:30:34.577 22:13:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:34.577 22:13:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:34.577 22:13:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:34.577 22:13:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:34.577 22:13:53 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:34.577 22:13:53 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:37.856 22:13:57 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:37.856 22:13:57 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:38.115 22:13:57 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:30:38.115 22:13:57 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:38.373 22:13:57 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:38.373 22:13:57 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:30:38.373 22:13:57 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:38.373 22:13:57 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:38.373 22:13:57 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:38.631 [2024-07-13 22:13:57.820544] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:38.631 22:13:57 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:38.889 22:13:58 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:38.889 22:13:58 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:39.147 22:13:58 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:39.147 22:13:58 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:39.406 22:13:58 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:39.664 [2024-07-13 22:13:58.809931] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:39.664 22:13:58 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:39.922 22:13:59 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:30:39.922 22:13:59 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:39.922 22:13:59 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:39.922 22:13:59 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:41.296 Initializing NVMe Controllers 00:30:41.296 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:30:41.296 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:30:41.296 Initialization complete. Launching workers. 00:30:41.296 ======================================================== 00:30:41.296 Latency(us) 00:30:41.296 Device Information : IOPS MiB/s Average min max 00:30:41.296 PCIE (0000:88:00.0) NSID 1 from core 0: 72179.13 281.95 442.20 37.60 5337.90 00:30:41.296 ======================================================== 00:30:41.296 Total : 72179.13 281.95 442.20 37.60 5337.90 00:30:41.296 00:30:41.296 22:14:00 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:41.296 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.670 Initializing NVMe Controllers 00:30:42.670 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:42.670 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:42.670 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:42.670 Initialization complete. Launching workers. 
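Before the fabrics runs above, perf.sh provisioned the target over JSON-RPC: create the TCP transport, create subsystem cnode1, attach both bdevs (the 64 MiB Malloc0 and the local NVMe drive at 0000:88:00.0, exposed as Nvme0n1) as namespaces, then listen on 10.0.0.2:4420 for both the subsystem and discovery. The same sequence as a standalone sketch, assuming rpc.py is talking to the nvmf_tgt that is running inside the cvl_0_0_ns_spdk namespace:

    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # becomes NSID 1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1     # becomes NSID 2
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420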
00:30:42.670 ======================================================== 00:30:42.670 Latency(us) 00:30:42.670 Device Information : IOPS MiB/s Average min max 00:30:42.670 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 92.00 0.36 11102.26 256.49 46175.48 00:30:42.670 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 61.00 0.24 16482.44 7915.53 47952.01 00:30:42.670 ======================================================== 00:30:42.670 Total : 153.00 0.60 13247.30 256.49 47952.01 00:30:42.670 00:30:42.670 22:14:01 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:42.670 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.042 Initializing NVMe Controllers 00:30:44.042 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:44.042 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:44.042 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:44.042 Initialization complete. Launching workers. 00:30:44.042 ======================================================== 00:30:44.042 Latency(us) 00:30:44.042 Device Information : IOPS MiB/s Average min max 00:30:44.042 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5533.72 21.62 5796.86 845.01 8989.82 00:30:44.042 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3887.81 15.19 8272.66 6439.96 17620.53 00:30:44.042 ======================================================== 00:30:44.042 Total : 9421.53 36.80 6818.50 845.01 17620.53 00:30:44.042 00:30:44.042 22:14:03 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:44.042 22:14:03 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:44.042 22:14:03 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:44.299 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.827 Initializing NVMe Controllers 00:30:46.827 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:46.827 Controller IO queue size 128, less than required. 00:30:46.827 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:46.827 Controller IO queue size 128, less than required. 00:30:46.827 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:46.827 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:46.827 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:46.827 Initialization complete. Launching workers. 
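Each of the runs above is a single spdk_nvme_perf invocation; the flags that vary across them are -q (queue depth), -o (IO size in bytes), and the transport ID string after -r, while -w randrw -M 50 keeps a 50/50 read/write mix and -t sets the run time in seconds. As an example, the q32 fabrics run above corresponds to:

    # One fabrics run from the sweep: 32 outstanding IOs of 4 KiB, 50/50 randrw, 1 second.
    ./build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'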
00:30:46.827 ======================================================== 00:30:46.827 Latency(us) 00:30:46.827 Device Information : IOPS MiB/s Average min max 00:30:46.827 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 876.42 219.11 154490.86 100936.10 327772.43 00:30:46.827 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 578.46 144.61 233594.97 104050.37 465333.52 00:30:46.827 ======================================================== 00:30:46.827 Total : 1454.88 363.72 185942.54 100936.10 465333.52 00:30:46.827 00:30:47.086 22:14:06 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:47.086 EAL: No free 2048 kB hugepages reported on node 1 00:30:47.344 No valid NVMe controllers or AIO or URING devices found 00:30:47.344 Initializing NVMe Controllers 00:30:47.344 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:47.344 Controller IO queue size 128, less than required. 00:30:47.344 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:47.344 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:47.344 Controller IO queue size 128, less than required. 00:30:47.344 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:47.344 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:30:47.344 WARNING: Some requested NVMe devices were skipped 00:30:47.344 22:14:06 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:47.602 EAL: No free 2048 kB hugepages reported on node 1 00:30:50.884 Initializing NVMe Controllers 00:30:50.884 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:50.884 Controller IO queue size 128, less than required. 00:30:50.884 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:50.884 Controller IO queue size 128, less than required. 00:30:50.884 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:50.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:50.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:50.884 Initialization complete. Launching workers. 
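The -o 36964 run above finds no testable namespaces because 36964 is not a multiple of the 512-byte sector size of either namespace: 36964 = 72 * 512 + 100, so perf removes NSID 1 and NSID 2 from the test and ends up reporting "No valid NVMe controllers". A quick check of the alignment:

    echo $(( 36964 % 512 ))   # 100 -> not sector-aligned, the namespace is skipped
    echo $(( 36964 / 512 ))   # 72 whole sectors plus the 100-byte remainder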
00:30:50.884 00:30:50.885 ==================== 00:30:50.885 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:50.885 TCP transport: 00:30:50.885 polls: 17131 00:30:50.885 idle_polls: 4842 00:30:50.885 sock_completions: 12289 00:30:50.885 nvme_completions: 3607 00:30:50.885 submitted_requests: 5374 00:30:50.885 queued_requests: 1 00:30:50.885 00:30:50.885 ==================== 00:30:50.885 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:50.885 TCP transport: 00:30:50.885 polls: 18603 00:30:50.885 idle_polls: 7524 00:30:50.885 sock_completions: 11079 00:30:50.885 nvme_completions: 3587 00:30:50.885 submitted_requests: 5370 00:30:50.885 queued_requests: 1 00:30:50.885 ======================================================== 00:30:50.885 Latency(us) 00:30:50.885 Device Information : IOPS MiB/s Average min max 00:30:50.885 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 901.47 225.37 155057.28 88381.70 425660.37 00:30:50.885 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 896.47 224.12 142606.78 86955.13 287697.80 00:30:50.885 ======================================================== 00:30:50.885 Total : 1797.94 449.48 148849.35 86955.13 425660.37 00:30:50.885 00:30:50.885 22:14:09 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:50.885 22:14:09 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:50.885 22:14:10 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:50.885 22:14:10 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:30:50.885 22:14:10 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:54.175 22:14:13 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=4fcdf725-9635-47b4-9fc1-729de20075bb 00:30:54.175 22:14:13 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 4fcdf725-9635-47b4-9fc1-729de20075bb 00:30:54.175 22:14:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=4fcdf725-9635-47b4-9fc1-729de20075bb 00:30:54.175 22:14:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:54.175 22:14:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:30:54.175 22:14:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:30:54.175 22:14:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:54.432 22:14:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:54.432 { 00:30:54.432 "uuid": "4fcdf725-9635-47b4-9fc1-729de20075bb", 00:30:54.432 "name": "lvs_0", 00:30:54.432 "base_bdev": "Nvme0n1", 00:30:54.432 "total_data_clusters": 238234, 00:30:54.432 "free_clusters": 238234, 00:30:54.432 "block_size": 512, 00:30:54.432 "cluster_size": 4194304 00:30:54.432 } 00:30:54.432 ]' 00:30:54.432 22:14:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="4fcdf725-9635-47b4-9fc1-729de20075bb") .free_clusters' 00:30:54.432 22:14:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:30:54.432 22:14:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="4fcdf725-9635-47b4-9fc1-729de20075bb") .cluster_size' 00:30:54.432 22:14:13 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:54.432 22:14:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:30:54.432 22:14:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:30:54.432 952936 00:30:54.432 22:14:13 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:54.432 22:14:13 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:54.432 22:14:13 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4fcdf725-9635-47b4-9fc1-729de20075bb lbd_0 20480 00:30:54.997 22:14:14 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=653abe57-279e-4919-a941-a20a7b668982 00:30:54.997 22:14:14 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 653abe57-279e-4919-a941-a20a7b668982 lvs_n_0 00:30:55.562 22:14:14 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=05b66b51-b025-40a9-8613-1573fe4d5d0c 00:30:55.562 22:14:14 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 05b66b51-b025-40a9-8613-1573fe4d5d0c 00:30:55.562 22:14:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=05b66b51-b025-40a9-8613-1573fe4d5d0c 00:30:55.562 22:14:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:55.562 22:14:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:30:55.562 22:14:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:30:55.562 22:14:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:55.819 22:14:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:55.819 { 00:30:55.819 "uuid": "4fcdf725-9635-47b4-9fc1-729de20075bb", 00:30:55.819 "name": "lvs_0", 00:30:55.819 "base_bdev": "Nvme0n1", 00:30:55.819 "total_data_clusters": 238234, 00:30:55.819 "free_clusters": 233114, 00:30:55.819 "block_size": 512, 00:30:55.819 "cluster_size": 4194304 00:30:55.819 }, 00:30:55.819 { 00:30:55.819 "uuid": "05b66b51-b025-40a9-8613-1573fe4d5d0c", 00:30:55.819 "name": "lvs_n_0", 00:30:55.819 "base_bdev": "653abe57-279e-4919-a941-a20a7b668982", 00:30:55.819 "total_data_clusters": 5114, 00:30:55.819 "free_clusters": 5114, 00:30:55.819 "block_size": 512, 00:30:55.819 "cluster_size": 4194304 00:30:55.819 } 00:30:55.819 ]' 00:30:55.819 22:14:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="05b66b51-b025-40a9-8613-1573fe4d5d0c") .free_clusters' 00:30:55.819 22:14:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:30:55.819 22:14:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="05b66b51-b025-40a9-8613-1573fe4d5d0c") .cluster_size' 00:30:56.077 22:14:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:56.077 22:14:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:30:56.077 22:14:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:30:56.077 20456 00:30:56.077 22:14:15 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:56.077 22:14:15 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 05b66b51-b025-40a9-8613-1573fe4d5d0c lbd_nest_0 20456 00:30:56.335 22:14:15 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=af35019e-812d-4ee2-8cc9-4aa1f0f93fcc 00:30:56.335 22:14:15 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:56.592 22:14:15 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:56.592 22:14:15 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 af35019e-812d-4ee2-8cc9-4aa1f0f93fcc 00:30:56.850 22:14:16 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:57.108 22:14:16 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:57.108 22:14:16 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:57.108 22:14:16 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:57.108 22:14:16 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:57.108 22:14:16 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:57.108 EAL: No free 2048 kB hugepages reported on node 1 00:31:09.304 Initializing NVMe Controllers 00:31:09.304 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:09.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:09.304 Initialization complete. Launching workers. 00:31:09.304 ======================================================== 00:31:09.304 Latency(us) 00:31:09.304 Device Information : IOPS MiB/s Average min max 00:31:09.304 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.20 0.02 21248.34 299.43 47864.76 00:31:09.304 ======================================================== 00:31:09.304 Total : 47.20 0.02 21248.34 299.43 47864.76 00:31:09.304 00:31:09.304 22:14:26 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:09.304 22:14:26 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:09.304 EAL: No free 2048 kB hugepages reported on node 1 00:31:19.269 Initializing NVMe Controllers 00:31:19.269 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:19.269 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:19.269 Initialization complete. Launching workers. 
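The long tail of this test is a full sweep: qd_depth=(1 32 128) crossed with io_size=(512 131072), each combination a 10-second randrw run against the roughly 20 GiB nested lvol exported as cnode1. The loop structure, reduced to a sketch:

    for qd in 1 32 128; do
        for io in 512 131072; do
            ./build/bin/spdk_nvme_perf -q "$qd" -o "$io" -w randrw -M 50 -t 10 \
                -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
        done
    done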
00:31:19.269 ======================================================== 00:31:19.269 Latency(us) 00:31:19.269 Device Information : IOPS MiB/s Average min max 00:31:19.269 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 82.20 10.27 12172.35 6013.45 47891.16 00:31:19.269 ======================================================== 00:31:19.270 Total : 82.20 10.27 12172.35 6013.45 47891.16 00:31:19.270 00:31:19.270 22:14:37 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:19.270 22:14:37 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:19.270 22:14:37 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:19.270 EAL: No free 2048 kB hugepages reported on node 1 00:31:29.277 Initializing NVMe Controllers 00:31:29.277 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:29.277 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:29.277 Initialization complete. Launching workers. 00:31:29.277 ======================================================== 00:31:29.277 Latency(us) 00:31:29.277 Device Information : IOPS MiB/s Average min max 00:31:29.277 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4600.90 2.25 6960.11 465.69 43904.30 00:31:29.277 ======================================================== 00:31:29.277 Total : 4600.90 2.25 6960.11 465.69 43904.30 00:31:29.277 00:31:29.277 22:14:47 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:29.277 22:14:47 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:29.277 EAL: No free 2048 kB hugepages reported on node 1 00:31:39.246 Initializing NVMe Controllers 00:31:39.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:39.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:39.246 Initialization complete. Launching workers. 00:31:39.246 ======================================================== 00:31:39.246 Latency(us) 00:31:39.246 Device Information : IOPS MiB/s Average min max 00:31:39.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1806.42 225.80 17723.74 1620.80 37525.66 00:31:39.246 ======================================================== 00:31:39.246 Total : 1806.42 225.80 17723.74 1620.80 37525.66 00:31:39.246 00:31:39.246 22:14:58 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:39.246 22:14:58 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:39.246 22:14:58 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:39.246 EAL: No free 2048 kB hugepages reported on node 1 00:31:51.444 Initializing NVMe Controllers 00:31:51.444 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:51.444 Controller IO queue size 128, less than required. 00:31:51.444 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
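The q128 runs print "Controller IO queue size 128, less than required": with -q 128 the initiator asks for more outstanding slots than the negotiated IO queue provides, so the excess requests sit queued inside the NVMe driver rather than on the wire, which inflates the reported latencies. If that queuing is unwanted, either lower -q, or raise the target-side depth when creating the transport; the flag below is an assumption based on SPDK's rpc.py and should be verified against your tree:

    # Assumption: nvmf_create_transport accepts -q/--max-queue-depth in this SPDK version.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -q 256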
00:31:51.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:51.444 Initialization complete. Launching workers. 00:31:51.444 ======================================================== 00:31:51.444 Latency(us) 00:31:51.444 Device Information : IOPS MiB/s Average min max 00:31:51.444 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8497.91 4.15 15067.89 2063.33 33028.22 00:31:51.444 ======================================================== 00:31:51.444 Total : 8497.91 4.15 15067.89 2063.33 33028.22 00:31:51.444 00:31:51.444 22:15:08 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:51.444 22:15:08 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:51.444 EAL: No free 2048 kB hugepages reported on node 1 00:32:01.414 Initializing NVMe Controllers 00:32:01.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:01.414 Controller IO queue size 128, less than required. 00:32:01.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:01.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:01.414 Initialization complete. Launching workers. 00:32:01.414 ======================================================== 00:32:01.414 Latency(us) 00:32:01.414 Device Information : IOPS MiB/s Average min max 00:32:01.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1165.48 145.68 110628.49 23073.51 248025.88 00:32:01.414 ======================================================== 00:32:01.414 Total : 1165.48 145.68 110628.49 23073.51 248025.88 00:32:01.414 00:32:01.414 22:15:19 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:01.414 22:15:19 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete af35019e-812d-4ee2-8cc9-4aa1f0f93fcc 00:32:01.414 22:15:20 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:01.414 22:15:20 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 653abe57-279e-4919-a941-a20a7b668982 00:32:01.683 22:15:20 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:01.943 22:15:21 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:32:01.944 22:15:21 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:32:01.944 22:15:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:01.944 22:15:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:32:01.944 22:15:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:01.944 22:15:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:32:01.944 22:15:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:01.944 22:15:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:01.944 rmmod nvme_tcp 00:32:01.944 rmmod nvme_fabrics 00:32:01.944 rmmod nvme_keyring 00:32:01.944 22:15:21 
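Cleanup above unwinds in strict reverse order of creation: delete the subsystem first so no initiator still holds the bdevs, then the nested lvol, the nested lvstore, the base lvol, and finally the base lvstore, before nvmftestfini unloads the nvme-tcp modules. As a sketch of the same sequence:

    RPC=./scripts/rpc.py
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    $RPC bdev_lvol_delete af35019e-812d-4ee2-8cc9-4aa1f0f93fcc    # lbd_nest_0
    $RPC bdev_lvol_delete_lvstore -l lvs_n_0
    $RPC bdev_lvol_delete 653abe57-279e-4919-a941-a20a7b668982    # lbd_0, base of lvs_n_0
    $RPC bdev_lvol_delete_lvstore -l lvs_0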
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:01.944 22:15:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:32:01.944 22:15:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:32:01.944 22:15:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 4181227 ']' 00:32:01.944 22:15:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 4181227 00:32:01.944 22:15:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 4181227 ']' 00:32:01.944 22:15:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 4181227 00:32:01.944 22:15:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:32:01.944 22:15:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:01.944 22:15:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4181227 00:32:02.201 22:15:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:02.201 22:15:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:02.201 22:15:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4181227' 00:32:02.201 killing process with pid 4181227 00:32:02.201 22:15:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 4181227 00:32:02.201 22:15:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 4181227 00:32:04.763 22:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:04.763 22:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:04.763 22:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:04.763 22:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:04.763 22:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:04.763 22:15:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.763 22:15:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:04.763 22:15:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.667 22:15:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:06.667 00:32:06.667 real 1m35.222s 00:32:06.667 user 5m49.682s 00:32:06.667 sys 0m16.368s 00:32:06.667 22:15:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:06.667 22:15:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:32:06.667 ************************************ 00:32:06.667 END TEST nvmf_perf 00:32:06.667 ************************************ 00:32:06.667 22:15:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:06.667 22:15:25 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:06.667 22:15:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:06.667 22:15:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:06.667 22:15:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:06.667 ************************************ 00:32:06.667 START TEST nvmf_fio_host 00:32:06.667 ************************************ 00:32:06.667 22:15:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:32:06.667 * Looking for test 
storage... 00:32:06.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:06.667 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:06.926 22:15:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:06.926 22:15:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:32:06.926 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:06.926 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:06.926 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:06.926 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:06.926 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:06.926 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:06.926 22:15:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:06.926 22:15:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.926 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:06.926 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:06.926 22:15:26 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:06.926 22:15:26 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:08.826 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
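[Note] Each candidate port is matched purely by its PCI vendor/device pair: nvmf/common.sh pre-seeds arrays for E810 (0x1592, 0x159b), X722 (0x37d2) and several Mellanox IDs, then walks pci_devs and reports what it finds — here 0000:0a:00.0 resolves to 0x8086:0x159b, an E810 port driven by ice (its sibling port 0000:0a:00.1 follows just below). A stand-alone equivalent of that classification, for illustration only — the helper below is not part of the harness:

  # hypothetical helper: classify one PCI function the way the arrays above do
  pci=0000:0a:00.0
  id=$(lspci -n -s "$pci" | awk '{print $3}')    # e.g. "8086:159b"
  case "$id" in
    8086:1592|8086:159b) echo "$pci: Intel E810 (ice driver)" ;;
    8086:37d2)           echo "$pci: Intel X722 (assumed i40e driver)" ;;
    15b3:*)              echo "$pci: Mellanox ConnectX family" ;;
    *)                   echo "$pci: not a supported NVMe-oF NIC" ;;
  esac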
00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:08.826 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.826 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:08.827 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:08.827 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
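[Note] With two E810 ports found and is_hw=yes, nvmf_tcp_init (traced below) emulates a two-host setup on a single machine: the first port is moved into a private network namespace and becomes the target at 10.0.0.2, while the second stays in the root namespace as the initiator at 10.0.0.1, so NVMe/TCP traffic actually traverses the NICs (the two ports are assumed to be physically looped, which is what NET_TYPE=phy implies). Condensed from the commands in the trace that follows:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ping -c 1 10.0.0.2    # both directions are verified before any test starts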
00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:08.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:08.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:32:08.827 00:32:08.827 --- 10.0.0.2 ping statistics --- 00:32:08.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.827 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:08.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:08.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:32:08.827 00:32:08.827 --- 10.0.0.1 ping statistics --- 00:32:08.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.827 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:08.827 22:15:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:08.827 22:15:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:32:08.827 22:15:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:32:08.827 22:15:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:08.827 22:15:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.827 22:15:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=330 00:32:08.827 22:15:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:32:08.827 22:15:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:08.827 22:15:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 330 00:32:08.827 22:15:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 330 ']' 00:32:08.827 22:15:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:08.827 22:15:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:08.827 22:15:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:08.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:08.827 22:15:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:08.827 22:15:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.827 [2024-07-13 22:15:28.101356] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:32:08.827 [2024-07-13 22:15:28.101485] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:08.827 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.084 [2024-07-13 22:15:28.242375] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:09.341 [2024-07-13 22:15:28.504963] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
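[Note] The target application is started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF) so that its TCP listener can later bind on 10.0.0.2, and waitforlisten blocks until PID 330 is up and answering on /var/tmp/spdk.sock before any RPC is issued. A rough equivalent of that launch-and-wait pattern — the polling loop below is a sketch, not the actual waitforlisten implementation:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the RPC socket until the target is ready to accept commands
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.5
  done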
00:32:09.341 [2024-07-13 22:15:28.505029] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:09.341 [2024-07-13 22:15:28.505064] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:09.341 [2024-07-13 22:15:28.505085] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:09.341 [2024-07-13 22:15:28.505109] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:09.341 [2024-07-13 22:15:28.505238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.341 [2024-07-13 22:15:28.505319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:09.341 [2024-07-13 22:15:28.505426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.341 [2024-07-13 22:15:28.505435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:09.905 22:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:09.905 22:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:32:09.905 22:15:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:09.905 [2024-07-13 22:15:29.256833] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:09.905 22:15:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:32:09.905 22:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:09.905 22:15:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.162 22:15:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:32:10.420 Malloc1 00:32:10.420 22:15:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:10.678 22:15:29 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:10.936 22:15:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:11.193 [2024-07-13 22:15:30.348629] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:11.193 22:15:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:11.460 22:15:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:11.460 22:15:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:11.460 22:15:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:32:11.460 22:15:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:11.460 22:15:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:11.460 22:15:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:11.460 22:15:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:11.460 22:15:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:11.460 22:15:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:11.460 22:15:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:11.460 22:15:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:11.460 22:15:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:11.460 22:15:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:11.460 22:15:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:11.460 22:15:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:11.460 22:15:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:11.460 22:15:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:11.460 22:15:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:11.719 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:11.719 fio-3.35 00:32:11.719 Starting 1 thread 00:32:11.719 EAL: No free 2048 kB hugepages reported on node 1 00:32:14.248 00:32:14.248 test: (groupid=0, jobs=1): err= 0: pid=881: Sat Jul 13 22:15:33 2024 00:32:14.248 read: IOPS=6548, BW=25.6MiB/s (26.8MB/s)(51.4MiB/2009msec) 00:32:14.248 slat (usec): min=2, max=165, avg= 3.62, stdev= 2.38 00:32:14.248 clat (usec): min=3480, max=17771, avg=10758.50, stdev=818.90 00:32:14.248 lat (usec): min=3523, max=17775, avg=10762.12, stdev=818.80 00:32:14.248 clat percentiles (usec): 00:32:14.248 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:32:14.248 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:32:14.248 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11731], 95.00th=[11994], 00:32:14.248 | 99.00th=[12518], 99.50th=[12780], 99.90th=[16450], 99.95th=[17433], 00:32:14.248 | 99.99th=[17695] 00:32:14.248 bw ( KiB/s): min=24808, max=27048, per=99.92%, avg=26170.00, stdev=983.93, samples=4 00:32:14.248 iops : min= 6202, max= 6762, avg=6542.50, stdev=245.98, samples=4 00:32:14.248 write: IOPS=6559, BW=25.6MiB/s (26.9MB/s)(51.5MiB/2009msec); 0 zone resets 00:32:14.248 slat (usec): min=2, max=141, avg= 3.87, stdev= 1.89 00:32:14.248 clat (usec): min=1844, max=16349, avg=8671.30, stdev=723.16 00:32:14.248 lat (usec): min=1863, max=16352, avg=8675.16, stdev=723.15 00:32:14.248 clat percentiles (usec): 00:32:14.248 | 1.00th=[ 6980], 5.00th=[ 7635], 
10.00th=[ 7898], 20.00th=[ 8160], 00:32:14.248 | 30.00th=[ 8356], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8848], 00:32:14.248 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9503], 95.00th=[ 9634], 00:32:14.248 | 99.00th=[10159], 99.50th=[10421], 99.90th=[14091], 99.95th=[15270], 00:32:14.248 | 99.99th=[16319] 00:32:14.248 bw ( KiB/s): min=26088, max=26432, per=100.00%, avg=26242.00, stdev=176.29, samples=4 00:32:14.248 iops : min= 6522, max= 6608, avg=6560.50, stdev=44.07, samples=4 00:32:14.248 lat (msec) : 2=0.01%, 4=0.08%, 10=56.56%, 20=43.36% 00:32:14.248 cpu : usr=64.24%, sys=30.88%, ctx=56, majf=0, minf=1537 00:32:14.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:14.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:14.248 issued rwts: total=13155,13178,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:14.248 00:32:14.248 Run status group 0 (all jobs): 00:32:14.248 READ: bw=25.6MiB/s (26.8MB/s), 25.6MiB/s-25.6MiB/s (26.8MB/s-26.8MB/s), io=51.4MiB (53.9MB), run=2009-2009msec 00:32:14.248 WRITE: bw=25.6MiB/s (26.9MB/s), 25.6MiB/s-25.6MiB/s (26.9MB/s-26.9MB/s), io=51.5MiB (54.0MB), run=2009-2009msec 00:32:14.248 ----------------------------------------------------- 00:32:14.248 Suppressions used: 00:32:14.248 count bytes template 00:32:14.248 1 57 /usr/src/fio/parse.c 00:32:14.248 1 8 libtcmalloc_minimal.so 00:32:14.248 ----------------------------------------------------- 00:32:14.248 00:32:14.248 22:15:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:14.248 22:15:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:14.248 22:15:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:14.248 22:15:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:14.248 22:15:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:14.248 22:15:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:14.248 22:15:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:14.248 22:15:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:14.248 22:15:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:14.248 22:15:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:14.248 22:15:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:14.248 22:15:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:14.248 22:15:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:14.248 22:15:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ 
-n /usr/lib64/libasan.so.8 ]] 00:32:14.248 22:15:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:14.248 22:15:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:14.248 22:15:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:14.506 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:14.506 fio-3.35 00:32:14.506 Starting 1 thread 00:32:14.506 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.036 00:32:17.036 test: (groupid=0, jobs=1): err= 0: pid=1347: Sat Jul 13 22:15:36 2024 00:32:17.036 read: IOPS=6125, BW=95.7MiB/s (100MB/s)(192MiB/2011msec) 00:32:17.036 slat (usec): min=3, max=156, avg= 5.06, stdev= 2.29 00:32:17.036 clat (usec): min=3681, max=23287, avg=12510.64, stdev=3300.25 00:32:17.036 lat (usec): min=3686, max=23292, avg=12515.70, stdev=3300.35 00:32:17.036 clat percentiles (usec): 00:32:17.036 | 1.00th=[ 6128], 5.00th=[ 7701], 10.00th=[ 8586], 20.00th=[ 9765], 00:32:17.036 | 30.00th=[10552], 40.00th=[11338], 50.00th=[12256], 60.00th=[13042], 00:32:17.036 | 70.00th=[13960], 80.00th=[15008], 90.00th=[16712], 95.00th=[19006], 00:32:17.036 | 99.00th=[21890], 99.50th=[22414], 99.90th=[23200], 99.95th=[23200], 00:32:17.036 | 99.99th=[23200] 00:32:17.036 bw ( KiB/s): min=39232, max=57696, per=49.56%, avg=48576.00, stdev=9301.27, samples=4 00:32:17.036 iops : min= 2452, max= 3606, avg=3036.00, stdev=581.33, samples=4 00:32:17.036 write: IOPS=3583, BW=56.0MiB/s (58.7MB/s)(99.7MiB/1781msec); 0 zone resets 00:32:17.036 slat (usec): min=33, max=170, avg=35.93, stdev= 5.31 00:32:17.036 clat (usec): min=5551, max=27336, avg=14908.87, stdev=2739.05 00:32:17.036 lat (usec): min=5587, max=27373, avg=14944.80, stdev=2738.98 00:32:17.036 clat percentiles (usec): 00:32:17.036 | 1.00th=[ 9634], 5.00th=[10814], 10.00th=[11731], 20.00th=[12649], 00:32:17.036 | 30.00th=[13173], 40.00th=[13960], 50.00th=[14615], 60.00th=[15270], 00:32:17.036 | 70.00th=[16057], 80.00th=[17171], 90.00th=[18744], 95.00th=[20055], 00:32:17.036 | 99.00th=[21627], 99.50th=[22414], 99.90th=[22938], 99.95th=[23200], 00:32:17.036 | 99.99th=[27395] 00:32:17.036 bw ( KiB/s): min=41024, max=60448, per=88.06%, avg=50496.00, stdev=9311.47, samples=4 00:32:17.036 iops : min= 2564, max= 3778, avg=3156.00, stdev=581.97, samples=4 00:32:17.036 lat (msec) : 4=0.03%, 10=16.21%, 20=79.84%, 50=3.92% 00:32:17.036 cpu : usr=72.45%, sys=22.82%, ctx=39, majf=0, minf=2073 00:32:17.036 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:32:17.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.036 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:17.036 issued rwts: total=12318,6383,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.036 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:17.036 00:32:17.036 Run status group 0 (all jobs): 00:32:17.036 READ: bw=95.7MiB/s (100MB/s), 95.7MiB/s-95.7MiB/s (100MB/s-100MB/s), io=192MiB (202MB), run=2011-2011msec 00:32:17.036 WRITE: bw=56.0MiB/s (58.7MB/s), 56.0MiB/s-56.0MiB/s (58.7MB/s-58.7MB/s), io=99.7MiB (105MB), run=1781-1781msec 00:32:17.036 
----------------------------------------------------- 00:32:17.036 Suppressions used: 00:32:17.036 count bytes template 00:32:17.036 1 57 /usr/src/fio/parse.c 00:32:17.036 193 18528 /usr/src/fio/iolog.c 00:32:17.036 1 8 libtcmalloc_minimal.so 00:32:17.036 ----------------------------------------------------- 00:32:17.036 00:32:17.036 22:15:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:17.294 22:15:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:32:17.294 22:15:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:32:17.294 22:15:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:32:17.294 22:15:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:32:17.294 22:15:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:32:17.294 22:15:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:17.294 22:15:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:17.294 22:15:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:32:17.557 22:15:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:32:17.557 22:15:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:32:17.557 22:15:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:32:20.837 Nvme0n1 00:32:20.837 22:15:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:23.395 22:15:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=901b0554-d09d-46db-a98d-7cfc0acb515f 00:32:23.395 22:15:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 901b0554-d09d-46db-a98d-7cfc0acb515f 00:32:23.395 22:15:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=901b0554-d09d-46db-a98d-7cfc0acb515f 00:32:23.395 22:15:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:23.395 22:15:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:32:23.395 22:15:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:32:23.395 22:15:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:23.653 22:15:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:23.653 { 00:32:23.653 "uuid": "901b0554-d09d-46db-a98d-7cfc0acb515f", 00:32:23.653 "name": "lvs_0", 00:32:23.653 "base_bdev": "Nvme0n1", 00:32:23.653 "total_data_clusters": 930, 00:32:23.653 "free_clusters": 930, 00:32:23.653 "block_size": 512, 00:32:23.653 "cluster_size": 1073741824 00:32:23.653 } 00:32:23.653 ]' 00:32:23.653 22:15:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="901b0554-d09d-46db-a98d-7cfc0acb515f") .free_clusters' 00:32:23.911 22:15:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:32:23.911 22:15:43 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="901b0554-d09d-46db-a98d-7cfc0acb515f") .cluster_size' 00:32:23.911 22:15:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:32:23.911 22:15:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:32:23.911 22:15:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:32:23.911 952320 00:32:23.911 22:15:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:32:24.167 5f8fb1f9-975b-4969-9a5c-d1a59cacaf88 00:32:24.167 22:15:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:24.424 22:15:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:24.682 22:15:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:24.940 22:15:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:24.940 22:15:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:24.940 22:15:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:24.940 22:15:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:24.940 22:15:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:24.940 22:15:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:24.940 22:15:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:24.940 22:15:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:24.940 22:15:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:24.940 22:15:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:24.940 22:15:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:24.940 22:15:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:24.940 22:15:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:24.940 22:15:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:24.940 22:15:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:24.940 22:15:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:24.940 22:15:44 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:25.198 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:25.198 fio-3.35 00:32:25.198 Starting 1 thread 00:32:25.198 EAL: No free 2048 kB hugepages reported on node 1 00:32:27.725 00:32:27.725 test: (groupid=0, jobs=1): err= 0: pid=2774: Sat Jul 13 22:15:46 2024 00:32:27.725 read: IOPS=4515, BW=17.6MiB/s (18.5MB/s)(35.5MiB/2011msec) 00:32:27.725 slat (usec): min=2, max=152, avg= 3.67, stdev= 2.32 00:32:27.725 clat (usec): min=1391, max=173496, avg=15538.02, stdev=13098.82 00:32:27.725 lat (usec): min=1395, max=173558, avg=15541.70, stdev=13099.20 00:32:27.725 clat percentiles (msec): 00:32:27.725 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 13], 20.00th=[ 14], 00:32:27.725 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 15], 00:32:27.725 | 70.00th=[ 16], 80.00th=[ 16], 90.00th=[ 16], 95.00th=[ 17], 00:32:27.725 | 99.00th=[ 20], 99.50th=[ 157], 99.90th=[ 174], 99.95th=[ 174], 00:32:27.725 | 99.99th=[ 174] 00:32:27.725 bw ( KiB/s): min=12552, max=19936, per=99.91%, avg=18046.00, stdev=3663.19, samples=4 00:32:27.725 iops : min= 3138, max= 4984, avg=4511.50, stdev=915.80, samples=4 00:32:27.725 write: IOPS=4524, BW=17.7MiB/s (18.5MB/s)(35.5MiB/2011msec); 0 zone resets 00:32:27.725 slat (usec): min=3, max=109, avg= 3.89, stdev= 1.67 00:32:27.725 clat (usec): min=492, max=170417, avg=12506.32, stdev=12299.45 00:32:27.725 lat (usec): min=496, max=170425, avg=12510.22, stdev=12299.81 00:32:27.725 clat percentiles (msec): 00:32:27.725 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 11], 00:32:27.725 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:32:27.725 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 13], 95.00th=[ 14], 00:32:27.725 | 99.00th=[ 17], 99.50th=[ 159], 99.90th=[ 171], 99.95th=[ 171], 00:32:27.725 | 99.99th=[ 171] 00:32:27.725 bw ( KiB/s): min=13224, max=19968, per=99.79%, avg=18058.00, stdev=3230.00, samples=4 00:32:27.726 iops : min= 3306, max= 4992, avg=4514.50, stdev=807.50, samples=4 00:32:27.726 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:32:27.726 lat (msec) : 2=0.02%, 4=0.10%, 10=3.22%, 20=95.80%, 50=0.14% 00:32:27.726 lat (msec) : 250=0.70% 00:32:27.726 cpu : usr=61.59%, sys=34.68%, ctx=99, majf=0, minf=1535 00:32:27.726 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:32:27.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.726 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:27.726 issued rwts: total=9081,9098,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.726 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:27.726 00:32:27.726 Run status group 0 (all jobs): 00:32:27.726 READ: bw=17.6MiB/s (18.5MB/s), 17.6MiB/s-17.6MiB/s (18.5MB/s-18.5MB/s), io=35.5MiB (37.2MB), run=2011-2011msec 00:32:27.726 WRITE: bw=17.7MiB/s (18.5MB/s), 17.7MiB/s-17.7MiB/s (18.5MB/s-18.5MB/s), io=35.5MiB (37.3MB), run=2011-2011msec 00:32:27.726 ----------------------------------------------------- 00:32:27.726 Suppressions used: 00:32:27.726 count bytes template 00:32:27.726 1 58 /usr/src/fio/parse.c 00:32:27.726 1 8 libtcmalloc_minimal.so 00:32:27.726 ----------------------------------------------------- 00:32:27.726 00:32:27.726 22:15:47 nvmf_tcp.nvmf_fio_host -- 
host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:28.291 22:15:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:29.222 22:15:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=d7158e67-6878-4e65-ba3d-a66c0f9d6268 00:32:29.223 22:15:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb d7158e67-6878-4e65-ba3d-a66c0f9d6268 00:32:29.223 22:15:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=d7158e67-6878-4e65-ba3d-a66c0f9d6268 00:32:29.223 22:15:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:32:29.223 22:15:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:32:29.223 22:15:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:32:29.223 22:15:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:29.481 22:15:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:32:29.481 { 00:32:29.481 "uuid": "901b0554-d09d-46db-a98d-7cfc0acb515f", 00:32:29.481 "name": "lvs_0", 00:32:29.481 "base_bdev": "Nvme0n1", 00:32:29.481 "total_data_clusters": 930, 00:32:29.481 "free_clusters": 0, 00:32:29.481 "block_size": 512, 00:32:29.481 "cluster_size": 1073741824 00:32:29.481 }, 00:32:29.481 { 00:32:29.481 "uuid": "d7158e67-6878-4e65-ba3d-a66c0f9d6268", 00:32:29.481 "name": "lvs_n_0", 00:32:29.481 "base_bdev": "5f8fb1f9-975b-4969-9a5c-d1a59cacaf88", 00:32:29.481 "total_data_clusters": 237847, 00:32:29.481 "free_clusters": 237847, 00:32:29.481 "block_size": 512, 00:32:29.481 "cluster_size": 4194304 00:32:29.481 } 00:32:29.481 ]' 00:32:29.481 22:15:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="d7158e67-6878-4e65-ba3d-a66c0f9d6268") .free_clusters' 00:32:29.481 22:15:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:32:29.481 22:15:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="d7158e67-6878-4e65-ba3d-a66c0f9d6268") .cluster_size' 00:32:29.738 22:15:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:32:29.738 22:15:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:32:29.738 22:15:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:32:29.738 951388 00:32:29.738 22:15:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:32:30.672 6fab9931-6998-484d-80e4-9e9d56c36a33 00:32:30.672 22:15:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:30.930 22:15:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:31.189 22:15:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:31.447 22:15:50 nvmf_tcp.nvmf_fio_host -- 
host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:31.448 22:15:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:31.448 22:15:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:31.448 22:15:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:31.448 22:15:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:31.448 22:15:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:31.448 22:15:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:32:31.448 22:15:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:31.448 22:15:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:31.448 22:15:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:31.448 22:15:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:32:31.448 22:15:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:31.448 22:15:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:31.448 22:15:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:31.448 22:15:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:32:31.448 22:15:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:31.448 22:15:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:31.706 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:31.706 fio-3.35 00:32:31.706 Starting 1 thread 00:32:31.706 EAL: No free 2048 kB hugepages reported on node 1 00:32:34.236 00:32:34.236 test: (groupid=0, jobs=1): err= 0: pid=3629: Sat Jul 13 22:15:53 2024 00:32:34.236 read: IOPS=4359, BW=17.0MiB/s (17.9MB/s)(34.3MiB/2012msec) 00:32:34.236 slat (usec): min=2, max=219, avg= 3.62, stdev= 3.20 00:32:34.236 clat (usec): min=6390, max=27439, avg=16159.71, stdev=1391.06 00:32:34.236 lat (usec): min=6398, max=27443, avg=16163.33, stdev=1390.88 00:32:34.236 clat percentiles (usec): 00:32:34.236 | 1.00th=[12911], 5.00th=[14091], 10.00th=[14615], 20.00th=[15139], 00:32:34.236 | 30.00th=[15533], 40.00th=[15795], 50.00th=[16188], 60.00th=[16450], 00:32:34.236 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:32:34.236 | 99.00th=[19268], 99.50th=[19792], 99.90th=[24249], 99.95th=[26346], 00:32:34.236 | 99.99th=[27395] 00:32:34.236 bw ( KiB/s): min=16120, max=18056, per=99.73%, 
avg=17390.00, stdev=864.92, samples=4 00:32:34.236 iops : min= 4030, max= 4514, avg=4347.50, stdev=216.23, samples=4 00:32:34.236 write: IOPS=4353, BW=17.0MiB/s (17.8MB/s)(34.2MiB/2012msec); 0 zone resets 00:32:34.236 slat (usec): min=3, max=154, avg= 3.81, stdev= 2.14 00:32:34.236 clat (usec): min=3234, max=23778, avg=12918.65, stdev=1218.83 00:32:34.236 lat (usec): min=3250, max=23782, avg=12922.46, stdev=1218.74 00:32:34.236 clat percentiles (usec): 00:32:34.236 | 1.00th=[10159], 5.00th=[11076], 10.00th=[11600], 20.00th=[11994], 00:32:34.236 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12911], 60.00th=[13173], 00:32:34.236 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14353], 95.00th=[14746], 00:32:34.236 | 99.00th=[15533], 99.50th=[16319], 99.90th=[20579], 99.95th=[22152], 00:32:34.236 | 99.99th=[23725] 00:32:34.236 bw ( KiB/s): min=17160, max=17576, per=100.00%, avg=17414.00, stdev=178.21, samples=4 00:32:34.236 iops : min= 4290, max= 4394, avg=4353.50, stdev=44.55, samples=4 00:32:34.236 lat (msec) : 4=0.02%, 10=0.45%, 20=99.25%, 50=0.28% 00:32:34.236 cpu : usr=59.57%, sys=36.75%, ctx=72, majf=0, minf=1533 00:32:34.236 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:32:34.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:34.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:34.236 issued rwts: total=8771,8759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:34.236 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:34.236 00:32:34.236 Run status group 0 (all jobs): 00:32:34.236 READ: bw=17.0MiB/s (17.9MB/s), 17.0MiB/s-17.0MiB/s (17.9MB/s-17.9MB/s), io=34.3MiB (35.9MB), run=2012-2012msec 00:32:34.236 WRITE: bw=17.0MiB/s (17.8MB/s), 17.0MiB/s-17.0MiB/s (17.8MB/s-17.8MB/s), io=34.2MiB (35.9MB), run=2012-2012msec 00:32:34.493 ----------------------------------------------------- 00:32:34.493 Suppressions used: 00:32:34.493 count bytes template 00:32:34.493 1 58 /usr/src/fio/parse.c 00:32:34.493 1 8 libtcmalloc_minimal.so 00:32:34.493 ----------------------------------------------------- 00:32:34.493 00:32:34.493 22:15:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:34.751 22:15:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:34.751 22:15:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:40.009 22:15:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:40.009 22:15:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:42.596 22:16:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:42.596 22:16:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:44.495 22:16:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:44.495 22:16:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:44.495 22:16:03 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:44.495 22:16:03 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:32:44.495 22:16:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:32:44.495 22:16:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:44.495 22:16:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:32:44.495 22:16:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:44.495 22:16:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:44.495 rmmod nvme_tcp 00:32:44.495 rmmod nvme_fabrics 00:32:44.495 rmmod nvme_keyring 00:32:44.495 22:16:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:44.495 22:16:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:32:44.495 22:16:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:32:44.495 22:16:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 330 ']' 00:32:44.495 22:16:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 330 00:32:44.495 22:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 330 ']' 00:32:44.495 22:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 330 00:32:44.495 22:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:32:44.495 22:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:44.495 22:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 330 00:32:44.495 22:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:44.495 22:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:44.495 22:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 330' 00:32:44.495 killing process with pid 330 00:32:44.495 22:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 330 00:32:44.495 22:16:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 330 00:32:45.868 22:16:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:45.868 22:16:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:45.868 22:16:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:45.868 22:16:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:45.869 22:16:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:45.869 22:16:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.869 22:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:45.869 22:16:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.769 22:16:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:47.769 00:32:47.769 real 0m41.163s 00:32:47.769 user 2m36.971s 00:32:47.769 sys 0m7.790s 00:32:47.769 22:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:47.769 22:16:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.769 ************************************ 00:32:47.769 END TEST nvmf_fio_host 00:32:47.769 ************************************ 00:32:48.027 22:16:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:48.027 22:16:07 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:48.027 22:16:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:48.027 22:16:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:48.027 22:16:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:48.027 ************************************ 00:32:48.027 START TEST nvmf_failover 00:32:48.027 ************************************ 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:48.027 * Looking for test storage... 00:32:48.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:32:48.027 22:16:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:49.924 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:49.924 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:49.924 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.925 22:16:09 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:49.925 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:49.925 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:49.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:49.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:32:49.925 00:32:49.925 --- 10.0.0.2 ping statistics --- 00:32:49.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:49.925 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:49.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:49.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:32:49.925 00:32:49.925 --- 10.0.0.1 ping statistics --- 00:32:49.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:49.925 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=7159 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 7159 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 7159 ']' 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:49.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
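
The nvmf_tcp_init steps traced above wire the two e810 ports into a loopback topology: one port (cvl_0_0) is moved into a private network namespace and becomes the target at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A minimal sketch of that plumbing, distilled from the commands logged above and using this run's interface names (it assumes the two ice-bound ports are already cabled back-to-back):

    # target side lives in its own namespace so host and target stacks stay separate
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP in
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

The target application is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE), which is why both ping checks and all the listeners created later resolve against 10.0.0.2.
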
00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:49.925 22:16:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:50.183 [2024-07-13 22:16:09.381031] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:32:50.183 [2024-07-13 22:16:09.381181] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:50.183 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.183 [2024-07-13 22:16:09.512958] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:50.441 [2024-07-13 22:16:09.739117] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:50.441 [2024-07-13 22:16:09.739210] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:50.441 [2024-07-13 22:16:09.739239] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:50.441 [2024-07-13 22:16:09.739257] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:50.441 [2024-07-13 22:16:09.739276] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:50.441 [2024-07-13 22:16:09.739418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:50.441 [2024-07-13 22:16:09.739457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.441 [2024-07-13 22:16:09.739467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:51.005 22:16:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:51.005 22:16:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:32:51.005 22:16:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:51.005 22:16:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:51.005 22:16:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:51.005 22:16:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:51.005 22:16:10 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:51.263 [2024-07-13 22:16:10.532551] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:51.263 22:16:10 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:51.521 Malloc0 00:32:51.521 22:16:10 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:52.086 22:16:11 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:52.086 22:16:11 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:52.344 [2024-07-13 22:16:11.664530] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:52.344 22:16:11 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:52.602 [2024-07-13 22:16:11.901219] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:52.602 22:16:11 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:52.861 [2024-07-13 22:16:12.146078] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:52.861 22:16:12 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=7455 00:32:52.861 22:16:12 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:52.861 22:16:12 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:52.861 22:16:12 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 7455 /var/tmp/bdevperf.sock 00:32:52.861 22:16:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 7455 ']' 00:32:52.861 22:16:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:52.861 22:16:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:52.861 22:16:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:52.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
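
At this point the failover fixture is fully assembled: one subsystem backed by a 64 MiB malloc bdev, reachable on three TCP ports, with bdevperf started in wait-for-RPC mode (-z) on its own socket so the test can attach and re-attach paths on the fly. The following is a condensed restatement of the RPC calls traced above, with the long rpc.py and bdevperf paths shortened for readability; the loop over ports is an editorial compression, not the script's literal form:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s "$port"
    done
    # initiator side: bdevperf idles (-z) until controllers are attached over its RPC socket
    bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f

The three listeners are what make the failover interesting: the test attaches NVMe0 through 4420, adds a second path through 4421, then removes the 4420 listener mid-run and expects I/O to continue over the surviving path.
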
00:32:52.861 22:16:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:52.861 22:16:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:53.796 22:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:53.796 22:16:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:32:53.796 22:16:13 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:54.360 NVMe0n1 00:32:54.360 22:16:13 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:54.618 00:32:54.618 22:16:13 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=7708 00:32:54.618 22:16:13 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:54.618 22:16:13 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:32:55.550 22:16:14 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:55.809 [2024-07-13 22:16:15.013781] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.013898] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.013923] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.013943] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.013962] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.013979] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.013997] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.014015] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.014033] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.014051] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.014069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.014103] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to 
be set 00:32:55.809 [2024-07-13 22:16:15.014121] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.014148] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.014182] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.014200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.014216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.014233] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.014249] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.014266] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.014282] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.014298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.014314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.014330] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.014347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.014362] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.014378] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.014395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.014410] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.014427] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 [2024-07-13 22:16:15.014443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003880 is same with the state(5) to be set 00:32:55.809 22:16:15 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:59.144 22:16:18 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 00:32:59.144 00:32:59.144 22:16:18 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:59.402 [2024-07-13 22:16:18.746757] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:32:59.402 [2024-07-13 22:16:18.746856] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:32:59.402 [2024-07-13 22:16:18.746891] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:32:59.402 [2024-07-13 22:16:18.746911] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:32:59.402 [2024-07-13 22:16:18.746929] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:32:59.402 [2024-07-13 22:16:18.746955] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:32:59.402 [2024-07-13 22:16:18.746973] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:32:59.402 22:16:18 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:33:02.684 22:16:21 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:02.684 [2024-07-13 22:16:22.007941] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:02.684 22:16:22 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:33:04.058 22:16:23 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:04.058 [2024-07-13 22:16:23.257037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:33:04.058 [2024-07-13 22:16:23.257120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:33:04.058 [2024-07-13 22:16:23.257148] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:33:04.058 [2024-07-13 22:16:23.257181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:33:04.058 [2024-07-13 22:16:23.257205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:33:04.058 [2024-07-13 22:16:23.257222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:33:04.058 [2024-07-13 22:16:23.257239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:33:04.058 [2024-07-13 22:16:23.257265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:33:04.058 
[2024-07-13 22:16:23.257281] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:33:04.058 [2024-07-13 22:16:23.257299] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:33:04.058 [2024-07-13 22:16:23.257315] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004c80 is same with the state(5) to be set 00:33:04.058 22:16:23 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 7708 00:33:10.627 0 00:33:10.627 22:16:28 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 7455 00:33:10.627 22:16:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 7455 ']' 00:33:10.627 22:16:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 7455 00:33:10.627 22:16:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:33:10.627 22:16:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:10.627 22:16:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 7455 00:33:10.627 22:16:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:10.627 22:16:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:10.627 22:16:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 7455' 00:33:10.627 killing process with pid 7455 00:33:10.627 22:16:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 7455 00:33:10.627 22:16:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 7455 00:33:10.627 22:16:29 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:10.627 [2024-07-13 22:16:12.239652] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:33:10.627 [2024-07-13 22:16:12.239820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid7455 ] 00:33:10.627 EAL: No free 2048 kB hugepages reported on node 1 00:33:10.627 [2024-07-13 22:16:12.375171] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.627 [2024-07-13 22:16:12.609461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:10.627 Running I/O for 15 seconds... 
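
The wall of completions that follows is the failover itself, replayed from try.txt: at 22:16:15 host/failover.sh removed the 4420 listener while the verify workload was in flight, so the target tears down the queue pair and every outstanding command completes as ABORTED - SQ DELETION (00/08). Each WRITE ... len:8 entry is one 4 KiB I/O (8 x 512-byte sectors, matching -o 4096) from the 128-deep queue being drained; because a second path was attached before the listener was pulled, bdev_nvme can resubmit on 4421 and the run finishes cleanly. A sketch of the trigger sequence, reconstructed from the host/failover.sh lines traced above (rpc.py and bdevperf.py paths shortened):

    # attaching the same subsystem twice under one name gives NVMe0 two paths
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &    # start the 15 s verify run
    sleep 1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                           # kill the active path mid-run
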
00:33:10.627 [2024-07-13 22:16:15.015267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.627 [2024-07-13 22:16:15.015324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.627 [2024-07-13 22:16:15.015353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.627 [2024-07-13 22:16:15.015376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.627 [2024-07-13 22:16:15.015398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.627 [2024-07-13 22:16:15.015420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.627 [2024-07-13 22:16:15.015442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.627 [2024-07-13 22:16:15.015463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.627 [2024-07-13 22:16:15.015484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:10.627 [2024-07-13 22:16:15.015578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:56528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.627 [2024-07-13 22:16:15.015609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.627 [2024-07-13 22:16:15.015661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:56536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.627 [2024-07-13 22:16:15.015688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.627 [2024-07-13 22:16:15.015714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:56544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.627 [2024-07-13 22:16:15.015737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.627 [2024-07-13 22:16:15.015760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.627 [2024-07-13 22:16:15.015782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.627 [2024-07-13 22:16:15.015806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:56560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.627 [2024-07-13 22:16:15.015829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.627 [2024-07-13 22:16:15.015859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:56568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.627 [2024-07-13 22:16:15.015892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.627 [2024-07-13 22:16:15.015917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:56576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.627 [2024-07-13 22:16:15.015946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.627 [2024-07-13 22:16:15.015971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:56584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.627 [2024-07-13 22:16:15.015993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.627 [2024-07-13 22:16:15.016017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:56592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.627 [2024-07-13 22:16:15.016039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.627 [2024-07-13 22:16:15.016062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:56600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.627 [2024-07-13 22:16:15.016084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.627 [2024-07-13 22:16:15.016124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:56608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.627 [2024-07-13 22:16:15.016147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.627 [2024-07-13 22:16:15.016196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.627 [2024-07-13 22:16:15.016219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.627 [2024-07-13 22:16:15.016242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:56624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.627 [2024-07-13 22:16:15.016263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.627 [2024-07-13 22:16:15.016285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:56632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.627 [2024-07-13 22:16:15.016305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.627 [2024-07-13 22:16:15.016327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:56640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.627 [2024-07-13 22:16:15.016348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.627 [2024-07-13 22:16:15.016371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.627 [2024-07-13 22:16:15.016391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:10.627 [2024-07-13 22:16:15.016414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:56656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:10.627 [2024-07-13 22:16:15.016435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_qpair.c print_command/print_completion NOTICE pair repeats for every remaining queued I/O on qid:1 (WRITE lba:56664-57376, READ lba:56360-56512); each command is ABORTED - SQ DELETION (00/08) ...]
00:33:10.630 [2024-07-13 22:16:15.021729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:10.630 [2024-07-13 22:16:15.021757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:10.630 [2024-07-13 22:16:15.021777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56520 len:8 PRP1 0x0 PRP2 0x0
00:33:10.630 [2024-07-13 22:16:15.021797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:10.630 [2024-07-13 22:16:15.022105] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2f00 was disconnected and freed. reset controller.
00:33:10.630 [2024-07-13 22:16:15.022138] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:33:10.630 [2024-07-13 22:16:15.022187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.630 [2024-07-13 22:16:15.026023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.630 [2024-07-13 22:16:15.026081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor
00:33:10.630 [2024-07-13 22:16:15.069154] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:33:10.630 [2024-07-13 22:16:18.747927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:10.630 [2024-07-13 22:16:18.747987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_qpair.c print_command/print_completion NOTICE pair repeats for every remaining queued I/O on qid:1 (WRITE lba:120104-120576, READ lba:119784-119896); each command is ABORTED - SQ DELETION (00/08) ...]
00:33:10.632 [2024-07-13 22:16:18.751594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:10.632 [2024-07-13 22:16:18.751615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08)
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.632 [2024-07-13 22:16:18.751642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.632 [2024-07-13 22:16:18.751664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.632 [2024-07-13 22:16:18.751686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.632 [2024-07-13 22:16:18.751707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.632 [2024-07-13 22:16:18.751729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:120608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.632 [2024-07-13 22:16:18.751750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.632 [2024-07-13 22:16:18.751773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.632 [2024-07-13 22:16:18.751794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.632 [2024-07-13 22:16:18.751816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.632 [2024-07-13 22:16:18.751837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.632 [2024-07-13 22:16:18.751887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.632 [2024-07-13 22:16:18.751911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.632 [2024-07-13 22:16:18.751936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.632 [2024-07-13 22:16:18.751958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.632 [2024-07-13 22:16:18.751981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.632 [2024-07-13 22:16:18.752003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.632 [2024-07-13 22:16:18.752026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.632 [2024-07-13 22:16:18.752047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.632 [2024-07-13 22:16:18.752071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.632 [2024-07-13 22:16:18.752093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:10.632 [2024-07-13 22:16:18.752116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.632 [2024-07-13 22:16:18.752138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.632 [2024-07-13 22:16:18.752162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.632 [2024-07-13 22:16:18.752184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.632 [2024-07-13 22:16:18.752224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:10.632 [2024-07-13 22:16:18.752249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.632 [2024-07-13 22:16:18.752300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.632 [2024-07-13 22:16:18.752328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120696 len:8 PRP1 0x0 PRP2 0x0 00:33:10.632 [2024-07-13 22:16:18.752349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.632 [2024-07-13 22:16:18.752453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.632 [2024-07-13 22:16:18.752483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.632 [2024-07-13 22:16:18.752506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.632 [2024-07-13 22:16:18.752528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.632 [2024-07-13 22:16:18.752549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.632 [2024-07-13 22:16:18.752569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.632 [2024-07-13 22:16:18.752590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.632 [2024-07-13 22:16:18.752610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.632 [2024-07-13 22:16:18.752630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:10.632 [2024-07-13 22:16:18.752917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.632 [2024-07-13 22:16:18.752945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.632 [2024-07-13 22:16:18.752965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120704 len:8 PRP1 0x0 PRP2 0x0 00:33:10.632 [2024-07-13 22:16:18.752985] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.632 [2024-07-13 22:16:18.753021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.632 [2024-07-13 22:16:18.753043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.632 [2024-07-13 22:16:18.753061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120712 len:8 PRP1 0x0 PRP2 0x0 00:33:10.632 [2024-07-13 22:16:18.753081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.632 [2024-07-13 22:16:18.753101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.632 [2024-07-13 22:16:18.753118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.632 [2024-07-13 22:16:18.753136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120720 len:8 PRP1 0x0 PRP2 0x0 00:33:10.632 [2024-07-13 22:16:18.753155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.632 [2024-07-13 22:16:18.753174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.632 [2024-07-13 22:16:18.753191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.753223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120728 len:8 PRP1 0x0 PRP2 0x0 00:33:10.633 [2024-07-13 22:16:18.753243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.633 [2024-07-13 22:16:18.753267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.633 [2024-07-13 22:16:18.753285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.753302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120736 len:8 PRP1 0x0 PRP2 0x0 00:33:10.633 [2024-07-13 22:16:18.753320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.633 [2024-07-13 22:16:18.753339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.633 [2024-07-13 22:16:18.753355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.753372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119904 len:8 PRP1 0x0 PRP2 0x0 00:33:10.633 [2024-07-13 22:16:18.753390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.633 [2024-07-13 22:16:18.753408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.633 [2024-07-13 22:16:18.753425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.753441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119912 len:8 PRP1 0x0 PRP2 0x0 00:33:10.633 [2024-07-13 22:16:18.753460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.633 [2024-07-13 22:16:18.753478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.633 [2024-07-13 22:16:18.753495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.753511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119920 len:8 PRP1 0x0 PRP2 0x0 00:33:10.633 [2024-07-13 22:16:18.753529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.633 [2024-07-13 22:16:18.753548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.633 [2024-07-13 22:16:18.753563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.753580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119928 len:8 PRP1 0x0 PRP2 0x0 00:33:10.633 [2024-07-13 22:16:18.753598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.633 [2024-07-13 22:16:18.753617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.633 [2024-07-13 22:16:18.753633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.753649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119936 len:8 PRP1 0x0 PRP2 0x0 00:33:10.633 [2024-07-13 22:16:18.753667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.633 [2024-07-13 22:16:18.753686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.633 [2024-07-13 22:16:18.753702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.753718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119944 len:8 PRP1 0x0 PRP2 0x0 00:33:10.633 [2024-07-13 22:16:18.753736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.633 [2024-07-13 22:16:18.753755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.633 [2024-07-13 22:16:18.753771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.753788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119952 len:8 PRP1 0x0 PRP2 0x0 00:33:10.633 [2024-07-13 22:16:18.753811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.633 [2024-07-13 22:16:18.753830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.633 [2024-07-13 22:16:18.753861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.753888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119960 len:8 PRP1 0x0 PRP2 0x0 00:33:10.633 [2024-07-13 22:16:18.753908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:33:10.633 [2024-07-13 22:16:18.753928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.633 [2024-07-13 22:16:18.753945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.753962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120744 len:8 PRP1 0x0 PRP2 0x0 00:33:10.633 [2024-07-13 22:16:18.753981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.633 [2024-07-13 22:16:18.754000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.633 [2024-07-13 22:16:18.754017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.754034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120752 len:8 PRP1 0x0 PRP2 0x0 00:33:10.633 [2024-07-13 22:16:18.754052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.633 [2024-07-13 22:16:18.754072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.633 [2024-07-13 22:16:18.754089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.754107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120760 len:8 PRP1 0x0 PRP2 0x0 00:33:10.633 [2024-07-13 22:16:18.754126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.633 [2024-07-13 22:16:18.754145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.633 [2024-07-13 22:16:18.754162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.754179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120768 len:8 PRP1 0x0 PRP2 0x0 00:33:10.633 [2024-07-13 22:16:18.754199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.633 [2024-07-13 22:16:18.754218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.633 [2024-07-13 22:16:18.754234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.754264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120776 len:8 PRP1 0x0 PRP2 0x0 00:33:10.633 [2024-07-13 22:16:18.754299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.633 [2024-07-13 22:16:18.754320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.633 [2024-07-13 22:16:18.754337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.754353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120784 len:8 PRP1 0x0 PRP2 0x0 00:33:10.633 [2024-07-13 22:16:18.754372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.633 [2024-07-13 22:16:18.754398] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.633 [2024-07-13 22:16:18.754416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.754436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120792 len:8 PRP1 0x0 PRP2 0x0 00:33:10.633 [2024-07-13 22:16:18.754455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.633 [2024-07-13 22:16:18.754474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.633 [2024-07-13 22:16:18.754490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.754506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120800 len:8 PRP1 0x0 PRP2 0x0 00:33:10.633 [2024-07-13 22:16:18.754524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.633 [2024-07-13 22:16:18.754543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.633 [2024-07-13 22:16:18.754560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.754576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119968 len:8 PRP1 0x0 PRP2 0x0 00:33:10.633 [2024-07-13 22:16:18.754594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.633 [2024-07-13 22:16:18.754614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.633 [2024-07-13 22:16:18.754630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.754647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119976 len:8 PRP1 0x0 PRP2 0x0 00:33:10.633 [2024-07-13 22:16:18.754664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.633 [2024-07-13 22:16:18.754683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.633 [2024-07-13 22:16:18.754700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.754717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119984 len:8 PRP1 0x0 PRP2 0x0 00:33:10.633 [2024-07-13 22:16:18.754735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.633 [2024-07-13 22:16:18.754753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.633 [2024-07-13 22:16:18.754769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.754786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119992 len:8 PRP1 0x0 PRP2 0x0 00:33:10.633 [2024-07-13 22:16:18.754805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.633 [2024-07-13 22:16:18.754823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:33:10.633 [2024-07-13 22:16:18.754839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.754881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120000 len:8 PRP1 0x0 PRP2 0x0 00:33:10.633 [2024-07-13 22:16:18.754903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.633 [2024-07-13 22:16:18.754923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.633 [2024-07-13 22:16:18.754941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.754959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120008 len:8 PRP1 0x0 PRP2 0x0 00:33:10.633 [2024-07-13 22:16:18.754977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.633 [2024-07-13 22:16:18.755006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.633 [2024-07-13 22:16:18.755025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.755043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120016 len:8 PRP1 0x0 PRP2 0x0 00:33:10.633 [2024-07-13 22:16:18.755062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.633 [2024-07-13 22:16:18.755081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.633 [2024-07-13 22:16:18.755098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.633 [2024-07-13 22:16:18.755115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120024 len:8 PRP1 0x0 PRP2 0x0 00:33:10.634 [2024-07-13 22:16:18.755134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.634 [2024-07-13 22:16:18.755153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.634 [2024-07-13 22:16:18.755186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.634 [2024-07-13 22:16:18.755203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120032 len:8 PRP1 0x0 PRP2 0x0 00:33:10.634 [2024-07-13 22:16:18.755221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.634 [2024-07-13 22:16:18.755240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.634 [2024-07-13 22:16:18.755256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.634 [2024-07-13 22:16:18.755274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120040 len:8 PRP1 0x0 PRP2 0x0 00:33:10.634 [2024-07-13 22:16:18.755292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.634 [2024-07-13 22:16:18.755311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.634 [2024-07-13 
22:16:18.755327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.634 [2024-07-13 22:16:18.755344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120048 len:8 PRP1 0x0 PRP2 0x0 00:33:10.634 [2024-07-13 22:16:18.755362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.634 [2024-07-13 22:16:18.755380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.634 [2024-07-13 22:16:18.755395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.634 [2024-07-13 22:16:18.755412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120056 len:8 PRP1 0x0 PRP2 0x0 00:33:10.634 [2024-07-13 22:16:18.755432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.634 [2024-07-13 22:16:18.755450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.634 [2024-07-13 22:16:18.755466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.634 [2024-07-13 22:16:18.755484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120064 len:8 PRP1 0x0 PRP2 0x0 00:33:10.634 [2024-07-13 22:16:18.755503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.634 [2024-07-13 22:16:18.755523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.634 [2024-07-13 22:16:18.755539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.634 [2024-07-13 22:16:18.755556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120072 len:8 PRP1 0x0 PRP2 0x0 00:33:10.634 [2024-07-13 22:16:18.755578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.634 [2024-07-13 22:16:18.755604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.634 [2024-07-13 22:16:18.755621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.634 [2024-07-13 22:16:18.755638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120080 len:8 PRP1 0x0 PRP2 0x0 00:33:10.634 [2024-07-13 22:16:18.755657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.634 [2024-07-13 22:16:18.755675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.634 [2024-07-13 22:16:18.755692] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.634 [2024-07-13 22:16:18.755708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120088 len:8 PRP1 0x0 PRP2 0x0 00:33:10.634 [2024-07-13 22:16:18.755727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.634 [2024-07-13 22:16:18.755745] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.634 [2024-07-13 22:16:18.755761] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.634 [2024-07-13 22:16:18.755777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120096 len:8 PRP1 0x0 PRP2 0x0 00:33:10.634 [2024-07-13 22:16:18.755796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.634 [2024-07-13 22:16:18.755814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.634 [2024-07-13 22:16:18.755830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.634 [2024-07-13 22:16:18.755863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120104 len:8 PRP1 0x0 PRP2 0x0 00:33:10.634 [2024-07-13 22:16:18.755893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.634 [2024-07-13 22:16:18.755913] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.634 [2024-07-13 22:16:18.755931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.634 [2024-07-13 22:16:18.755948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120112 len:8 PRP1 0x0 PRP2 0x0 00:33:10.634 [2024-07-13 22:16:18.755967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.634 [2024-07-13 22:16:18.755986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.634 [2024-07-13 22:16:18.756002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.634 [2024-07-13 22:16:18.756020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120120 len:8 PRP1 0x0 PRP2 0x0 00:33:10.634 [2024-07-13 22:16:18.756039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.634 [2024-07-13 22:16:18.756058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.634 [2024-07-13 22:16:18.756075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.634 [2024-07-13 22:16:18.756092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120128 len:8 PRP1 0x0 PRP2 0x0 00:33:10.634 [2024-07-13 22:16:18.756111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.634 [2024-07-13 22:16:18.756130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.634 [2024-07-13 22:16:18.756162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.634 [2024-07-13 22:16:18.756183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120136 len:8 PRP1 0x0 PRP2 0x0 00:33:10.634 [2024-07-13 22:16:18.756203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.634 [2024-07-13 22:16:18.756228] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.634 [2024-07-13 22:16:18.756245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:33:10.634 [2024-07-13 22:16:18.756262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120144 len:8 PRP1 0x0 PRP2 0x0 00:33:10.634 [2024-07-13 22:16:18.756280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.634 [2024-07-13 22:16:18.756298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.634 [2024-07-13 22:16:18.756314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.634 [2024-07-13 22:16:18.756331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120152 len:8 PRP1 0x0 PRP2 0x0 00:33:10.634 [2024-07-13 22:16:18.756349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.634 [2024-07-13 22:16:18.756368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.634 [2024-07-13 22:16:18.756384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.634 [2024-07-13 22:16:18.756401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120160 len:8 PRP1 0x0 PRP2 0x0 00:33:10.634 [2024-07-13 22:16:18.756420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.634 [2024-07-13 22:16:18.756438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.634 [2024-07-13 22:16:18.756454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.634 [2024-07-13 22:16:18.756471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120168 len:8 PRP1 0x0 PRP2 0x0 00:33:10.634 [2024-07-13 22:16:18.756489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.634 [2024-07-13 22:16:18.756507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.634 [2024-07-13 22:16:18.756523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.634 [2024-07-13 22:16:18.756540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120176 len:8 PRP1 0x0 PRP2 0x0 00:33:10.634 [2024-07-13 22:16:18.756557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.634 [2024-07-13 22:16:18.756576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.634 [2024-07-13 22:16:18.756592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.634 [2024-07-13 22:16:18.756609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120184 len:8 PRP1 0x0 PRP2 0x0 00:33:10.634 [2024-07-13 22:16:18.756626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.634 [2024-07-13 22:16:18.756644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.634 [2024-07-13 22:16:18.756661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.634 
[2024-07-13 22:16:18.756692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120192 len:8 PRP1 0x0 PRP2 0x0 00:33:10.634 [2024-07-13 22:16:18.756711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.634 [2024-07-13 22:16:18.756730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.634 [2024-07-13 22:16:18.756751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.634 [2024-07-13 22:16:18.756768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120200 len:8 PRP1 0x0 PRP2 0x0 00:33:10.634 [2024-07-13 22:16:18.756787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.634 [2024-07-13 22:16:18.756811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.634 [2024-07-13 22:16:18.756829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.634 [2024-07-13 22:16:18.756860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120208 len:8 PRP1 0x0 PRP2 0x0 00:33:10.634 [2024-07-13 22:16:18.756890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.634 [2024-07-13 22:16:18.756911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.634 [2024-07-13 22:16:18.756928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.634 [2024-07-13 22:16:18.756946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120216 len:8 PRP1 0x0 PRP2 0x0 00:33:10.634 [2024-07-13 22:16:18.756965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.634 [2024-07-13 22:16:18.756984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.634 [2024-07-13 22:16:18.757001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.634 [2024-07-13 22:16:18.757019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120224 len:8 PRP1 0x0 PRP2 0x0 00:33:10.635 [2024-07-13 22:16:18.757038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.635 [2024-07-13 22:16:18.757057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.635 [2024-07-13 22:16:18.757074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.635 [2024-07-13 22:16:18.757091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119784 len:8 PRP1 0x0 PRP2 0x0 00:33:10.635 [2024-07-13 22:16:18.757110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.635 [2024-07-13 22:16:18.757128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.635 [2024-07-13 22:16:18.757145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.635 [2024-07-13 22:16:18.757162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119792 len:8 PRP1 0x0 PRP2 0x0 00:33:10.635 [2024-07-13 22:16:18.757180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.635 [2024-07-13 22:16:18.757199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.635 [2024-07-13 22:16:18.757240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.635 [2024-07-13 22:16:18.757257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119800 len:8 PRP1 0x0 PRP2 0x0 00:33:10.635 [2024-07-13 22:16:18.757275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.635 [2024-07-13 22:16:18.757294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.635 [2024-07-13 22:16:18.757311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.635 [2024-07-13 22:16:18.757328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119808 len:8 PRP1 0x0 PRP2 0x0 00:33:10.635 [2024-07-13 22:16:18.757346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.635 [2024-07-13 22:16:18.757369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.635 [2024-07-13 22:16:18.757386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.635 [2024-07-13 22:16:18.770475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119816 len:8 PRP1 0x0 PRP2 0x0 00:33:10.635 [2024-07-13 22:16:18.770516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.635 [2024-07-13 22:16:18.770542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.635 [2024-07-13 22:16:18.770561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.635 [2024-07-13 22:16:18.770579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119824 len:8 PRP1 0x0 PRP2 0x0 00:33:10.635 [2024-07-13 22:16:18.770598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.635 [2024-07-13 22:16:18.770617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.635 [2024-07-13 22:16:18.770633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.635 [2024-07-13 22:16:18.770650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119832 len:8 PRP1 0x0 PRP2 0x0 00:33:10.635 [2024-07-13 22:16:18.770667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.635 [2024-07-13 22:16:18.770686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.635 [2024-07-13 22:16:18.770702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.635 [2024-07-13 22:16:18.770719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:120232 len:8 PRP1 0x0 PRP2 0x0 00:33:10.635 [2024-07-13 22:16:18.770737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.635 [2024-07-13 22:16:18.770755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.635 [2024-07-13 22:16:18.770770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.635 [2024-07-13 22:16:18.770786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120240 len:8 PRP1 0x0 PRP2 0x0 00:33:10.635 [2024-07-13 22:16:18.770804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.635 [2024-07-13 22:16:18.770822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.635 [2024-07-13 22:16:18.770838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.635 [2024-07-13 22:16:18.770881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120248 len:8 PRP1 0x0 PRP2 0x0 00:33:10.635 [2024-07-13 22:16:18.770912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.635 [2024-07-13 22:16:18.770933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.635 [2024-07-13 22:16:18.770950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.635 [2024-07-13 22:16:18.770968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120256 len:8 PRP1 0x0 PRP2 0x0 00:33:10.635 [2024-07-13 22:16:18.770987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.635 [2024-07-13 22:16:18.771006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.635 [2024-07-13 22:16:18.771023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.635 [2024-07-13 22:16:18.771040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120264 len:8 PRP1 0x0 PRP2 0x0 00:33:10.635 [2024-07-13 22:16:18.771066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.635 [2024-07-13 22:16:18.771086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.635 [2024-07-13 22:16:18.771103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.635 [2024-07-13 22:16:18.771120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120272 len:8 PRP1 0x0 PRP2 0x0 00:33:10.635 [2024-07-13 22:16:18.771139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.635 [2024-07-13 22:16:18.771178] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.635 [2024-07-13 22:16:18.771196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.635 [2024-07-13 22:16:18.771213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120280 len:8 PRP1 0x0 PRP2 0x0 
00:33:10.635 [2024-07-13 22:16:18.771247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.635 [2024-07-13 22:16:18.771267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.635 [2024-07-13 22:16:18.771283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.635 [2024-07-13 22:16:18.771299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120288 len:8 PRP1 0x0 PRP2 0x0 00:33:10.635 [2024-07-13 22:16:18.771317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.635 [2024-07-13 22:16:18.771334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.635 [2024-07-13 22:16:18.771350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.635 [2024-07-13 22:16:18.771366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120296 len:8 PRP1 0x0 PRP2 0x0 00:33:10.635 [2024-07-13 22:16:18.771385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.635 [2024-07-13 22:16:18.771402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.635 [2024-07-13 22:16:18.771418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.635 [2024-07-13 22:16:18.771434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120304 len:8 PRP1 0x0 PRP2 0x0 00:33:10.635 [2024-07-13 22:16:18.771451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.635 [2024-07-13 22:16:18.771468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.635 [2024-07-13 22:16:18.771484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.635 [2024-07-13 22:16:18.771501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120312 len:8 PRP1 0x0 PRP2 0x0 00:33:10.635 [2024-07-13 22:16:18.771519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.635 [2024-07-13 22:16:18.771536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.635 [2024-07-13 22:16:18.771552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.635 [2024-07-13 22:16:18.771568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120320 len:8 PRP1 0x0 PRP2 0x0 00:33:10.635 [2024-07-13 22:16:18.771586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.635 [2024-07-13 22:16:18.771604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.635 [2024-07-13 22:16:18.771624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.635 [2024-07-13 22:16:18.771642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120328 len:8 PRP1 0x0 PRP2 0x0 00:33:10.635 [2024-07-13 22:16:18.771660] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:10.635-638 [2024-07-13 22:16:18.771677 .. 22:16:18.775838] nvme_qpair.c: the identical four-entry abort sequence (579:nvme_qpair_abort_queued_reqs *ERROR*: aborting queued i/o; 558:nvme_qpair_manual_complete_request *NOTICE*: Command completed manually; 243:nvme_io_qpair_print_command *NOTICE*; 474:spdk_nvme_print_completion *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0) repeats once per LBA for WRITE sqid:1 cid:0 nsid:1 lba:120336-120696 (len:8, PRP1 0x0 PRP2 0x0) and READ sqid:1 cid:0 nsid:1 lba:119840-119896 (len:8, PRP1 0x0 PRP2 0x0)
00:33:10.638 [2024-07-13 22:16:18.776194] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f3180 was disconnected and freed. reset controller.
00:33:10.638 [2024-07-13 22:16:18.776239] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:33:10.638 [2024-07-13 22:16:18.776260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:10.638 [2024-07-13 22:16:18.776338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor
00:33:10.638 [2024-07-13 22:16:18.780325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:10.638 [2024-07-13 22:16:18.921515] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:33:10.638-641 [2024-07-13 22:16:23.258458 .. 22:16:23.264282] nvme_qpair.c: the abort pattern repeats on qid:1; each queued command is printed (243:nvme_io_qpair_print_command *NOTICE*) and completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (474:spdk_nvme_print_completion *NOTICE*), once per LBA: READ sqid:1 (varying cids) nsid:1 lba:113280-113440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; WRITE sqid:1 (varying cids) nsid:1 lba:113448-114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000; finally WRITE sqid:1 cid:0 nsid:1 lba:114208-114216 len:8 PRP1 0x0 PRP2 0x0 completed manually (558:nvme_qpair_manual_complete_request *NOTICE*; 579:nvme_qpair_abort_queued_reqs *ERROR*: aborting queued i/o)
00:33:10.641 [2024-07-13 22:16:23.264301]
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.641 [2024-07-13 22:16:23.264318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.641 [2024-07-13 22:16:23.264335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114224 len:8 PRP1 0x0 PRP2 0x0 00:33:10.641 [2024-07-13 22:16:23.264353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.641 [2024-07-13 22:16:23.264372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.641 [2024-07-13 22:16:23.264388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.641 [2024-07-13 22:16:23.264405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114232 len:8 PRP1 0x0 PRP2 0x0 00:33:10.641 [2024-07-13 22:16:23.264423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.641 [2024-07-13 22:16:23.264447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.641 [2024-07-13 22:16:23.264464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.641 [2024-07-13 22:16:23.264482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114240 len:8 PRP1 0x0 PRP2 0x0 00:33:10.641 [2024-07-13 22:16:23.264500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.641 [2024-07-13 22:16:23.264518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.641 [2024-07-13 22:16:23.264535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.641 [2024-07-13 22:16:23.264551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114248 len:8 PRP1 0x0 PRP2 0x0 00:33:10.641 [2024-07-13 22:16:23.264570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.641 [2024-07-13 22:16:23.264588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.641 [2024-07-13 22:16:23.264604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.641 [2024-07-13 22:16:23.264621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114256 len:8 PRP1 0x0 PRP2 0x0 00:33:10.641 [2024-07-13 22:16:23.264645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.641 [2024-07-13 22:16:23.264664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.641 [2024-07-13 22:16:23.264680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.641 [2024-07-13 22:16:23.264697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114264 len:8 PRP1 0x0 PRP2 0x0 00:33:10.641 [2024-07-13 22:16:23.264715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.641 [2024-07-13 22:16:23.264733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:33:10.641 [2024-07-13 22:16:23.264749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.641 [2024-07-13 22:16:23.264766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114272 len:8 PRP1 0x0 PRP2 0x0 00:33:10.641 [2024-07-13 22:16:23.264784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.641 [2024-07-13 22:16:23.264802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.641 [2024-07-13 22:16:23.264820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.641 [2024-07-13 22:16:23.264838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114280 len:8 PRP1 0x0 PRP2 0x0 00:33:10.641 [2024-07-13 22:16:23.264856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.641 [2024-07-13 22:16:23.264898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.641 [2024-07-13 22:16:23.264917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.641 [2024-07-13 22:16:23.264935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114288 len:8 PRP1 0x0 PRP2 0x0 00:33:10.641 [2024-07-13 22:16:23.264956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.641 [2024-07-13 22:16:23.264975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:10.641 [2024-07-13 22:16:23.264993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:10.641 [2024-07-13 22:16:23.265010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114296 len:8 PRP1 0x0 PRP2 0x0 00:33:10.641 [2024-07-13 22:16:23.265033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.641 [2024-07-13 22:16:23.265325] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f3900 was disconnected and freed. reset controller. 
00:33:10.641 [2024-07-13 22:16:23.265355] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:33:10.641 [2024-07-13 22:16:23.265421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.641 [2024-07-13 22:16:23.265449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.641 [2024-07-13 22:16:23.265474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.641 [2024-07-13 22:16:23.265495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.641 [2024-07-13 22:16:23.265516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.641 [2024-07-13 22:16:23.265535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.641 [2024-07-13 22:16:23.265557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:10.641 [2024-07-13 22:16:23.265577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:10.641 [2024-07-13 22:16:23.265596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:10.641 [2024-07-13 22:16:23.265686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:10.641 [2024-07-13 22:16:23.269553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:10.641 [2024-07-13 22:16:23.358143] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
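A dump like the condensed one above is easier to triage by counting notices than by reading them linearly. The grep patterns below are taken from the log text itself; the file name is only a placeholder for wherever the bdevperf output was captured:

    # Tally aborted completions and manually completed queued I/Os
    # (bdevperf.log is a placeholder name, not a file from this job).
    grep -c 'ABORTED - SQ DELETION' bdevperf.log
    grep -c 'Command completed manually' bdevperf.log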
00:33:10.641 
00:33:10.641 Latency(us)
00:33:10.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:10.641 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:10.641 Verification LBA range: start 0x0 length 0x4000
00:33:10.641 NVMe0n1 : 15.01 6151.86 24.03 572.94 0.00 18999.79 1110.47 35923.44
00:33:10.641 ===================================================================================================================
00:33:10.641 Total : 6151.86 24.03 572.94 0.00 18999.79 1110.47 35923.44
00:33:10.641 Received shutdown signal, test time was about 15.000000 seconds
00:33:10.641 
00:33:10.641 Latency(us)
00:33:10.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:10.641 ===================================================================================================================
00:33:10.641 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:10.641 22:16:29 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:33:10.641 22:16:29 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:33:10.641 22:16:29 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:33:10.641 22:16:29 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=9552 00:33:10.641 22:16:29 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:33:10.641 22:16:29 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 9552 /var/tmp/bdevperf.sock 00:33:10.641 22:16:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 9552 ']' 00:33:10.641 22:16:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:10.641 22:16:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:10.641 22:16:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
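The xtrace that follows re-runs the failover exercise whose pass check appears just above. Condensed into a hedged sketch (the RPC calls, addresses, ports, and NQN are verbatim from this trace; the loop, variable names, and the assumption that the counted log is the try.txt dumped later are mine):

    # Sketch of the failover exercise: expose three listeners, attach all
    # of them under one controller name, then detach the active path so
    # bdev_nvme fails over to the next trid.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1
    LOG=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt

    for port in 4420 4421 4422; do
        $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s $port
        $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp \
             -a 10.0.0.2 -s $port -f ipv4 -n $NQN
    done
    $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
         -s 4420 -f ipv4 -n $NQN

    # Each successful failover logs one "Resetting controller successful"
    # notice; the check above expects one per detached path.
    count=$(grep -c 'Resetting controller successful' $LOG)
    (( count == 3 )) || exit 1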
00:33:10.641 22:16:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:10.641 22:16:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:11.576 22:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:11.576 22:16:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:33:11.576 22:16:30 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:11.834 [2024-07-13 22:16:31.162471] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:11.835 22:16:31 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:33:12.093 [2024-07-13 22:16:31.399184] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:33:12.093 22:16:31 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:12.659 NVMe0n1 00:33:12.659 22:16:31 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:13.226 00:33:13.226 22:16:32 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:13.485 00:33:13.485 22:16:32 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:13.485 22:16:32 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:33:13.743 22:16:32 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:13.743 22:16:33 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:33:17.034 22:16:36 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:17.034 22:16:36 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:33:17.034 22:16:36 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=10350 00:33:17.034 22:16:36 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:17.034 22:16:36 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 10350 00:33:18.453 0 00:33:18.453 22:16:37 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:18.453 [2024-07-13 22:16:30.021589] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:33:18.453 [2024-07-13 22:16:30.021750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid9552 ] 00:33:18.453 EAL: No free 2048 kB hugepages reported on node 1 00:33:18.453 [2024-07-13 22:16:30.149792] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.453 [2024-07-13 22:16:30.382052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.453 [2024-07-13 22:16:33.113664] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:18.453 [2024-07-13 22:16:33.113800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:18.453 [2024-07-13 22:16:33.113849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.453 [2024-07-13 22:16:33.113886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:18.453 [2024-07-13 22:16:33.113910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.453 [2024-07-13 22:16:33.113931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:18.453 [2024-07-13 22:16:33.113953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.453 [2024-07-13 22:16:33.113974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:18.453 [2024-07-13 22:16:33.113995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:18.453 [2024-07-13 22:16:33.114015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:18.453 [2024-07-13 22:16:33.114103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:18.453 [2024-07-13 22:16:33.114153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:18.453 [2024-07-13 22:16:33.121042] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:33:18.453 Running I/O for 1 seconds... 
00:33:18.453 
00:33:18.453 Latency(us)
00:33:18.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:18.453 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:18.453 Verification LBA range: start 0x0 length 0x4000
00:33:18.453 NVMe0n1 : 1.01 6286.76 24.56 0.00 0.00 20264.35 2961.26 20000.62
00:33:18.453 ===================================================================================================================
00:33:18.453 Total : 6286.76 24.56 0.00 0.00 20264.35 2961.26 20000.62
00:33:18.453 22:16:37 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 22:16:37 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 22:16:37 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:18.711 22:16:38 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 22:16:38 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:33:18.969 22:16:38 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:19.227 22:16:38 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:33:22.506 22:16:41 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 22:16:41 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 22:16:41 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 9552 22:16:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 9552 ']' 22:16:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 9552 22:16:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 22:16:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 22:16:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 9552 22:16:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 22:16:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 22:16:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 9552' killing process with pid 9552 22:16:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 9552 22:16:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 9552 00:33:23.882 22:16:42 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 22:16:42 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 22:16:43 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 22:16:43 
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:23.882 22:16:43 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:23.882 22:16:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:23.882 22:16:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:33:23.882 22:16:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:23.882 22:16:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:33:23.882 22:16:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:23.882 22:16:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:23.882 rmmod nvme_tcp 00:33:23.882 rmmod nvme_fabrics 00:33:23.882 rmmod nvme_keyring 00:33:23.882 22:16:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:23.882 22:16:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:33:23.882 22:16:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:33:23.882 22:16:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 7159 ']' 00:33:23.882 22:16:43 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 7159 00:33:23.882 22:16:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 7159 ']' 00:33:23.882 22:16:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 7159 00:33:23.882 22:16:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:33:23.882 22:16:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:23.882 22:16:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 7159 00:33:23.882 22:16:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:23.882 22:16:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:23.882 22:16:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 7159' 00:33:23.882 killing process with pid 7159 00:33:23.882 22:16:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 7159 00:33:23.882 22:16:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 7159 00:33:25.255 22:16:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:25.255 22:16:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:25.255 22:16:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:25.255 22:16:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:25.255 22:16:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:25.255 22:16:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.255 22:16:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:25.255 22:16:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.786 22:16:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:27.786 00:33:27.786 real 0m39.424s 00:33:27.786 user 2m16.348s 00:33:27.786 sys 0m6.851s 00:33:27.786 22:16:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:27.786 22:16:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:27.786 
************************************ 00:33:27.786 END TEST nvmf_failover 00:33:27.786 ************************************ 00:33:27.786 22:16:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:27.786 22:16:46 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:27.786 22:16:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:27.786 22:16:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:27.786 22:16:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:27.786 ************************************ 00:33:27.786 START TEST nvmf_host_discovery 00:33:27.786 ************************************ 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:27.786 * Looking for test storage... 00:33:27.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh
00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go toolchain triplet repeated several times by nested sourcing, then the standard system paths through /var/lib/snapd/snap/bin; full value elided ...]
00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same value as above with one more toolchain dir prepended; elided ...]
00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same value again with one more toolchain dir prepended; elided ...]
00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo [... the exported PATH value; elided ...]
00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:27.786 22:16:46 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:27.786 22:16:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:33:27.787 22:16:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:29.687 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:29.687 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:33:29.687 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:29.687 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:29.688 22:16:48 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:29.688 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:29.688 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:29.688 22:16:48 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:29.688 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:29.688 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:29.688 22:16:48 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:29.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:29.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:33:29.688 00:33:29.688 --- 10.0.0.2 ping statistics --- 00:33:29.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:29.688 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:29.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:29.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:33:29.688 00:33:29.688 --- 10.0.0.1 ping statistics --- 00:33:29.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:29.688 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=13209 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 13209 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 13209 ']' 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:29.688 22:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:29.689 22:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:29.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:29.689 22:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:29.689 22:16:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:29.689 [2024-07-13 22:16:48.954263] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:33:29.689 [2024-07-13 22:16:48.954404] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:29.689 EAL: No free 2048 kB hugepages reported on node 1 00:33:29.947 [2024-07-13 22:16:49.095223] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.205 [2024-07-13 22:16:49.356955] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:30.205 [2024-07-13 22:16:49.357020] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:30.205 [2024-07-13 22:16:49.357048] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:30.205 [2024-07-13 22:16:49.357074] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:30.205 [2024-07-13 22:16:49.357098] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
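For orientation, the namespace plumbing traced above and the target launch reduce to this sketch. Interface names, addresses, and the nvmf_tgt invocation are taken from the log; running it standalone assumes root and the same two-port E810 setup:

    # cvl_0_0 moves into a namespace as the target side; cvl_0_1 stays in
    # the root namespace as the initiator side.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Connectivity checks in both directions, then the target runs inside
    # the namespace (binary path from the log).
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &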
00:33:30.205 [2024-07-13 22:16:49.357159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:30.768 22:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:30.768 22:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:33:30.768 22:16:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:30.768 22:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:30.768 22:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:30.768 22:16:49 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:30.768 22:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:30.768 22:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.768 22:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:30.769 [2024-07-13 22:16:49.933078] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:30.769 [2024-07-13 22:16:49.941285] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:30.769 null0 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:30.769 null1 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=13360 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 13360 /tmp/host.sock 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 13360 ']' 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:30.769 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:30.769 22:16:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:30.769 [2024-07-13 22:16:50.060535] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:33:30.769 [2024-07-13 22:16:50.060689] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid13360 ] 00:33:30.769 EAL: No free 2048 kB hugepages reported on node 1 00:33:31.026 [2024-07-13 22:16:50.190605] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.283 [2024-07-13 22:16:50.427298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:31.848 22:16:51 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 null0 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:31.848 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:32.105 [2024-07-13 22:16:51.337257] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.105 22:16:51 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:32.105 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.363 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:33:32.363 22:16:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:33:32.926 [2024-07-13 22:16:52.096092] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:32.926 [2024-07-13 22:16:52.096134] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:32.926 [2024-07-13 22:16:52.096183] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:32.926 [2024-07-13 22:16:52.182510] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:32.926 [2024-07-13 22:16:52.286517] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:33:32.926 [2024-07-13 22:16:52.286555] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:33.184 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:33.184 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:33.184 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:33.184 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:33.184 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:33.184 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.184 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.184 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:33.184 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:33.184 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.184 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:33.184 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:33.184 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:33.184 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:33.184 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:33.184 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:33.184 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:33:33.184 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:33.184 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:33.184 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.184 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.184 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:33.184 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:33.184 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:33.184 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:33.444 22:16:52 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.444 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.703 [2024-07-13 22:16:52.838082] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:33.703 [2024-07-13 22:16:52.838466] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:33.703 [2024-07-13 22:16:52.838523] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.703 [2024-07-13 22:16:52.925377] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:33:33.703 22:16:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:33:33.959 [2024-07-13 22:16:53.232176] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:33.959 [2024-07-13 22:16:53.232218] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:33.959 [2024-07-13 22:16:53.232238] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:34.893 22:16:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:34.893 22:16:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:34.893 22:16:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:34.893 22:16:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:34.893 22:16:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:34.893 22:16:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.893 22:16:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.893 22:16:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:34.893 22:16:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:34.893 22:16:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.893 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:34.893 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:34.893 22:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:34.893 22:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:34.893 22:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:34.893 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:34.893 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:34.893 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.894 [2024-07-13 22:16:54.063063] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:34.894 [2024-07-13 22:16:54.063127] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.894 [2024-07-13 22:16:54.066295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:34.894 [2024-07-13 22:16:54.066356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.894 [2024-07-13 22:16:54.066383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:34.894 [2024-07-13 22:16:54.066405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.894 [2024-07-13 22:16:54.066426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:34.894 [2024-07-13 22:16:54.066446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.894 [2024-07-13 22:16:54.066467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:34.894 [2024-07-13 22:16:54.066487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:34.894 [2024-07-13 22:16:54.066507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:34.894 22:16:54 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:34.894 [2024-07-13 22:16:54.076279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.894 [2024-07-13 22:16:54.086323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:34.894 [2024-07-13 22:16:54.086646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.894 [2024-07-13 22:16:54.086686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:34.894 [2024-07-13 22:16:54.086711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:34.894 [2024-07-13 22:16:54.086745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:34.894 [2024-07-13 22:16:54.086777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:34.894 [2024-07-13 22:16:54.086799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:34.894 [2024-07-13 22:16:54.086822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:34.894 [2024-07-13 22:16:54.086860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
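The repeated @912-@918 records throughout this test are the harness's waitforcondition helper at work: it takes a shell condition as a string and retries it up to ten times with a one-second sleep between attempts. Reconstructed from the xtrace line numbers shown here, the pattern is roughly:

  # Sketch of waitforcondition as the xtrace suggests it behaves; the
  # exact body lives in common/autotest_common.sh.
  waitforcondition() {
          local cond=$1
          local max=10
          while (( max-- )); do
                  eval "$cond" && return 0
                  sleep 1
          done
          return 1
  }
  # Used above as, e.g.:
  #   waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
  #   waitforcondition 'get_notification_count && ((notification_count == expected_count))'

So a check such as get_subsystem_paths returning "4420 4421" has up to ten seconds to become true before the test gives up.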
00:33:34.894 [2024-07-13 22:16:54.096446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:34.894 [2024-07-13 22:16:54.096783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.894 [2024-07-13 22:16:54.096820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:34.894 [2024-07-13 22:16:54.096843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:34.894 [2024-07-13 22:16:54.096883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:34.894 [2024-07-13 22:16:54.096915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:34.894 [2024-07-13 22:16:54.096936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:34.894 [2024-07-13 22:16:54.096954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:34.894 [2024-07-13 22:16:54.096983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.894 [2024-07-13 22:16:54.106575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:34.894 [2024-07-13 22:16:54.106881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.894 [2024-07-13 22:16:54.106921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:34.894 [2024-07-13 22:16:54.106944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:34.894 [2024-07-13 22:16:54.106976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:34.894 [2024-07-13 22:16:54.107007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:34.894 [2024-07-13 22:16:54.107027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:34.894 [2024-07-13 22:16:54.107046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:34.894 [2024-07-13 22:16:54.107074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:34.894 [2024-07-13 22:16:54.116711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:34.894 [2024-07-13 22:16:54.117007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.894 [2024-07-13 22:16:54.117046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:34.894 [2024-07-13 22:16:54.117070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:34.894 [2024-07-13 22:16:54.117103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:34.894 [2024-07-13 22:16:54.117133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:34.894 [2024-07-13 22:16:54.117164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:34.894 [2024-07-13 22:16:54.117182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:34.894 [2024-07-13 22:16:54.117237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
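errno = 111 in the posix_sock_create records is ECONNREFUSED: the @127 rpc_cmd above removed the 4420 listener, the established qpair to 10.0.0.2:4420 was torn down (hence the ABORTED - SQ DELETION completions for the outstanding ASYNC EVENT REQUESTs), and every bdev_nvme reset attempt now targets a port nobody is listening on. On a typical Linux host the constant comes from:

  $ grep ECONNREFUSED /usr/include/asm-generic/errno.h
  #define ECONNREFUSED    111     /* Connection refused */

The retries keep failing until the discovery poller fetches a fresh log page and prunes the dead path, leaving only the 4421 path.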
00:33:34.894 [2024-07-13 22:16:54.126830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:34.894 [2024-07-13 22:16:54.127068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.894 [2024-07-13 22:16:54.127105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:34.894 [2024-07-13 22:16:54.127129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:34.894 [2024-07-13 22:16:54.127164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:34.894 [2024-07-13 22:16:54.127194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:34.894 [2024-07-13 22:16:54.127215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:34.894 [2024-07-13 22:16:54.127233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:34.894 [2024-07-13 22:16:54.127261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.894 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.894 [2024-07-13 22:16:54.136954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:34.894 [2024-07-13 22:16:54.137182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.894 [2024-07-13 22:16:54.137218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:34.894 [2024-07-13 22:16:54.137247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:34.895 [2024-07-13 22:16:54.137279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:34.895 [2024-07-13 22:16:54.137309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:34.895 [2024-07-13 22:16:54.137330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:34.895 [2024-07-13 22:16:54.137348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:34.895 [2024-07-13 22:16:54.137407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
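A few records below, the discovery poller logs the 4420 path as "not found" and 4421 as "found again"; the stale path is then dropped and get_subsystem_paths starts returning 4421 alone, which is exactly what the @131 waitforcondition is spinning on. Assuming rpc_cmd is backed by scripts/rpc.py, the equivalent stand-alone check against the host app is:

  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  # prints "4420 4421" while both listeners are up, then "4421" once
  # the 4420 path has been pruned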
00:33:34.895 [2024-07-13 22:16:54.147045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:34.895 [2024-07-13 22:16:54.147264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.895 [2024-07-13 22:16:54.147300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:34.895 [2024-07-13 22:16:54.147322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:34.895 [2024-07-13 22:16:54.147354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:34.895 [2024-07-13 22:16:54.147383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:34.895 [2024-07-13 22:16:54.147404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:34.895 [2024-07-13 22:16:54.147422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:34.895 [2024-07-13 22:16:54.147450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.895 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:34.895 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:34.895 22:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:34.895 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:34.895 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:34.895 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:34.895 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:34.895 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:34.895 22:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:34.895 22:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:34.895 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:34.895 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:34.895 22:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:34.895 22:16:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:34.895 [2024-07-13 22:16:54.157172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:34.895 [2024-07-13 22:16:54.157472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.895 [2024-07-13 22:16:54.157520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:34.895 [2024-07-13 22:16:54.157548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:34.895 [2024-07-13 22:16:54.157581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:34.895 [2024-07-13 22:16:54.157936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:34.895 [2024-07-13 22:16:54.157967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:34.895 [2024-07-13 22:16:54.157988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:34.895 [2024-07-13 22:16:54.158017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.895 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:34.895 [2024-07-13 22:16:54.167282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:34.895 [2024-07-13 22:16:54.167532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.895 [2024-07-13 22:16:54.167577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:34.895 [2024-07-13 22:16:54.167612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:34.895 [2024-07-13 22:16:54.167644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:34.895 [2024-07-13 22:16:54.167675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:34.895 [2024-07-13 22:16:54.167695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:34.895 [2024-07-13 22:16:54.167729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:34.895 [2024-07-13 22:16:54.167774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.895 [2024-07-13 22:16:54.177394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:34.895 [2024-07-13 22:16:54.177659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.895 [2024-07-13 22:16:54.177695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:34.895 [2024-07-13 22:16:54.177718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:34.895 [2024-07-13 22:16:54.177749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:34.895 [2024-07-13 22:16:54.177809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:34.895 [2024-07-13 22:16:54.177836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:34.895 [2024-07-13 22:16:54.177872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:33:34.895 [2024-07-13 22:16:54.177904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.895 [2024-07-13 22:16:54.187497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:34.895 [2024-07-13 22:16:54.187758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:34.895 [2024-07-13 22:16:54.187794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2280 with addr=10.0.0.2, port=4420 00:33:34.895 [2024-07-13 22:16:54.187826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set 00:33:34.895 [2024-07-13 22:16:54.187858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:33:34.895 [2024-07-13 22:16:54.187933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:34.895 [2024-07-13 22:16:54.187959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:34.895 [2024-07-13 22:16:54.187978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:34.895 [2024-07-13 22:16:54.188005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:34.895 [2024-07-13 22:16:54.190970] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:34.895 [2024-07-13 22:16:54.191013] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:34.895 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:33:34.895 22:16:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:33:35.831 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:35.831 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:35.831 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:33:35.831 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:35.831 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.831 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:35.831 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.831 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:35.831 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:35.831 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- 
# expected_count=0 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # 
xargs 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:36.090 22:16:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:37.464 [2024-07-13 22:16:56.477078] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:37.465 [2024-07-13 22:16:56.477140] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:37.465 [2024-07-13 22:16:56.477199] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:37.465 [2024-07-13 22:16:56.563493] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:37.724 [2024-07-13 22:16:56.875539] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:37.724 [2024-07-13 22:16:56.875638] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:33:37.724 request: 00:33:37.724 { 00:33:37.724 "name": "nvme", 00:33:37.724 "trtype": "tcp", 00:33:37.724 "traddr": "10.0.0.2", 00:33:37.724 "adrfam": "ipv4", 00:33:37.724 "trsvcid": "8009", 00:33:37.724 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:37.724 "wait_for_attach": true, 00:33:37.724 "method": "bdev_nvme_start_discovery", 00:33:37.724 "req_id": 1 00:33:37.724 } 00:33:37.724 Got JSON-RPC error response 00:33:37.724 response: 00:33:37.724 { 00:33:37.724 "code": -17, 00:33:37.724 "message": "File exists" 00:33:37.724 } 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:37.724 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.725 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:37.725 request: 00:33:37.725 { 00:33:37.725 "name": "nvme_second", 00:33:37.725 "trtype": "tcp", 00:33:37.725 "traddr": "10.0.0.2", 00:33:37.725 "adrfam": "ipv4", 00:33:37.725 "trsvcid": "8009", 00:33:37.725 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:37.725 "wait_for_attach": true, 00:33:37.725 "method": "bdev_nvme_start_discovery", 00:33:37.725 "req_id": 1 00:33:37.725 } 00:33:37.725 Got JSON-RPC error response 00:33:37.725 response: 00:33:37.725 { 00:33:37.725 "code": -17, 00:33:37.725 "message": "File exists" 00:33:37.725 } 00:33:37.725 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:37.725 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:33:37.725 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:37.725 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:37.725 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:37.725 22:16:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:37.725 22:16:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:37.725 22:16:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:37.725 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.725 22:16:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:37.725 22:16:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:37.725 22:16:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:37.725 22:16:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.725 22:16:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:37.725 22:16:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:37.725 22:16:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:37.725 22:16:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.725 22:16:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:37.725 22:16:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:37.725 22:16:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:37.725 22:16:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:37.725 22:16:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.725 22:16:57 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:37.725 22:16:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:37.725 22:16:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:33:37.725 22:16:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:37.725 22:16:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:37.725 22:16:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:37.725 22:16:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:37.725 22:16:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:37.725 22:16:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:37.725 22:16:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.725 22:16:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:39.099 [2024-07-13 22:16:58.095475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.099 [2024-07-13 22:16:58.095562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3400 with addr=10.0.0.2, port=8010 00:33:39.099 [2024-07-13 22:16:58.095647] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:39.099 [2024-07-13 22:16:58.095674] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:39.099 [2024-07-13 22:16:58.095698] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:40.032 [2024-07-13 22:16:59.098027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.032 [2024-07-13 22:16:59.098114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f3680 with addr=10.0.0.2, port=8010 00:33:40.032 [2024-07-13 22:16:59.098208] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:40.032 [2024-07-13 22:16:59.098232] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:40.032 [2024-07-13 22:16:59.098253] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:40.965 [2024-07-13 22:17:00.099954] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:40.965 request: 00:33:40.965 { 00:33:40.965 "name": "nvme_second", 00:33:40.965 "trtype": "tcp", 00:33:40.965 "traddr": "10.0.0.2", 00:33:40.965 "adrfam": "ipv4", 00:33:40.965 "trsvcid": "8010", 00:33:40.965 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:40.965 "wait_for_attach": false, 00:33:40.965 "attach_timeout_ms": 3000, 00:33:40.965 "method": "bdev_nvme_start_discovery", 00:33:40.965 "req_id": 1 00:33:40.965 } 00:33:40.965 Got JSON-RPC error response 00:33:40.965 response: 00:33:40.965 { 00:33:40.965 "code": -110, 
00:33:40.965 "message": "Connection timed out" 00:33:40.965 } 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 13360 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:40.965 rmmod nvme_tcp 00:33:40.965 rmmod nvme_fabrics 00:33:40.965 rmmod nvme_keyring 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 13209 ']' 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 13209 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 13209 ']' 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 13209 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 13209 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:40.965 22:17:00 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 13209' 00:33:40.965 killing process with pid 13209 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 13209 00:33:40.965 22:17:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 13209 00:33:42.337 22:17:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:42.337 22:17:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:42.337 22:17:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:42.337 22:17:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:42.337 22:17:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:42.337 22:17:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:42.337 22:17:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:42.337 22:17:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:44.237 00:33:44.237 real 0m16.831s 00:33:44.237 user 0m25.672s 00:33:44.237 sys 0m3.199s 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:44.237 ************************************ 00:33:44.237 END TEST nvmf_host_discovery 00:33:44.237 ************************************ 00:33:44.237 22:17:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:44.237 22:17:03 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:44.237 22:17:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:44.237 22:17:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:44.237 22:17:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:44.237 ************************************ 00:33:44.237 START TEST nvmf_host_multipath_status 00:33:44.237 ************************************ 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:44.237 * Looking for test storage... 
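The teardown that closed nvmf_host_discovery above is the stock nvmftestfini path: kill the target process, unload the host-side kernel NVMe fabrics modules, and flush the test address from the initiator port. Condensed from the commands shown in the log, with the module and interface names this job uses:

  # nvmftestfini, condensed
  modprobe -v -r nvme-tcp      # also pulls out nvme_fabrics/nvme_keyring dependents, per the rmmod output above
  modprobe -v -r nvme-fabrics
  ip -4 addr flush cvl_0_1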
00:33:44.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:44.237 22:17:03 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:44.237 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:44.238 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:44.238 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:44.238 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:44.238 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:33:44.238 22:17:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:46.768 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:46.768 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:46.769 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
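NIC selection above keys off PCI vendor:device IDs; 0x8086:0x159b is the Intel E810 this job requests via SPDK_TEST_NVMF_NICS=e810. The loop that follows maps each matched PCI function to its kernel net device through the sysfs glob visible in the xtrace; a minimal sketch, with a device address taken from this log:

  # Resolve a PCI function to its net device name(s), as nvmf/common.sh does below
  pci=0000:0a:00.0
  ls "/sys/bus/pci/devices/$pci/net/"    # prints cvl_0_0 on this host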
00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:46.769 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:46.769 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:46.769 22:17:05 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:46.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:46.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:33:46.769 00:33:46.769 --- 10.0.0.2 ping statistics --- 00:33:46.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.769 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:46.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:46.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:33:46.769 00:33:46.769 --- 10.0.0.1 ping statistics --- 00:33:46.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.769 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=16749 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 16749 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 16749 ']' 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:46.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:46.769 22:17:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:46.769 [2024-07-13 22:17:05.789568] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
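The two pings above validate the back-to-back topology nvmf_tcp_init just built: the first E810 port (cvl_0_0) is moved into namespace cvl_0_0_ns_spdk and addressed as the target at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. The setup, condensed from the commands shown above:

  # Target port isolated in its own network namespace; initiator in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT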
00:33:46.769 [2024-07-13 22:17:05.789710] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:46.769 EAL: No free 2048 kB hugepages reported on node 1 00:33:46.769 [2024-07-13 22:17:05.920350] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:46.769 [2024-07-13 22:17:06.149198] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:46.769 [2024-07-13 22:17:06.149273] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:46.769 [2024-07-13 22:17:06.149318] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:46.769 [2024-07-13 22:17:06.149336] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:46.769 [2024-07-13 22:17:06.149353] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:46.769 [2024-07-13 22:17:06.149468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:46.769 [2024-07-13 22:17:06.149476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:47.702 22:17:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:47.702 22:17:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:33:47.702 22:17:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:47.702 22:17:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:47.702 22:17:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:47.702 22:17:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:47.702 22:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=16749 00:33:47.702 22:17:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:47.702 [2024-07-13 22:17:06.993410] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:47.702 22:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:47.960 Malloc0 00:33:47.960 22:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:33:48.218 22:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:48.476 22:17:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:48.734 [2024-07-13 22:17:08.084473] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:48.734 22:17:08 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:48.992 [2024-07-13 22:17:08.321061] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:48.992 22:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=17073 00:33:48.992 22:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:48.992 22:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:48.992 22:17:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 17073 /var/tmp/bdevperf.sock 00:33:48.992 22:17:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 17073 ']' 00:33:48.992 22:17:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:48.992 22:17:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:48.992 22:17:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:48.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:48.992 22:17:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:48.992 22:17:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:50.364 22:17:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:50.364 22:17:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:33:50.364 22:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:50.364 22:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:33:50.622 Nvme0n1 00:33:50.622 22:17:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:51.188 Nvme0n1 00:33:51.188 22:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:51.188 22:17:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:53.119 22:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:53.119 22:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:53.377 22:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:53.635 22:17:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:54.568 22:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:54.568 22:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:54.568 22:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.568 22:17:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:54.826 22:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:54.826 22:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:54.826 22:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.826 22:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:55.083 22:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:55.083 22:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:55.083 22:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.083 22:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:55.340 22:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:55.341 22:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:55.341 22:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.341 22:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:55.598 22:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:55.598 22:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:55.598 22:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.598 22:17:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:55.855 22:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:55.855 22:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:55.855 22:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.855 22:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:56.113 22:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:56.113 22:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:56.113 22:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:56.372 22:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:56.630 22:17:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:57.562 22:17:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:57.562 22:17:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:57.562 22:17:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:57.562 22:17:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:57.820 22:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:57.820 22:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:57.820 22:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:57.820 22:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:58.078 22:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:58.078 22:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:58.078 22:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.078 22:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:58.337 22:17:17 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:58.337 22:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:58.337 22:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.337 22:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:58.596 22:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:58.596 22:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:58.596 22:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.596 22:17:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:58.855 22:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:58.855 22:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:58.855 22:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.855 22:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:59.114 22:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:59.114 22:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:59.114 22:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:59.372 22:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:59.631 22:17:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:34:00.565 22:17:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:34:00.565 22:17:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:00.565 22:17:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:00.565 22:17:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:00.823 22:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:00.823 22:17:20 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:00.823 22:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:00.823 22:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:01.082 22:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:01.082 22:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:01.082 22:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:01.082 22:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:01.340 22:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:01.340 22:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:01.340 22:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:01.340 22:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:01.599 22:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:01.599 22:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:01.599 22:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:01.599 22:17:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:01.857 22:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:01.857 22:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:01.857 22:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:01.857 22:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:02.116 22:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.116 22:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:34:02.116 22:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:02.374 22:17:21 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:02.633 22:17:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:34:03.566 22:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:34:03.566 22:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:03.567 22:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:03.567 22:17:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:03.825 22:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:03.825 22:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:03.825 22:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:03.825 22:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:04.081 22:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:04.081 22:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:04.081 22:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:04.081 22:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:04.339 22:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:04.339 22:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:04.339 22:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:04.339 22:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:04.596 22:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:04.597 22:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:04.597 22:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:04.597 22:17:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:04.864 22:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:34:04.864 22:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:04.864 22:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:04.864 22:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:05.146 22:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:05.146 22:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:34:05.146 22:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:05.405 22:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:05.662 22:17:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:34:06.595 22:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:34:06.595 22:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:06.595 22:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:06.595 22:17:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:06.852 22:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:06.852 22:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:06.852 22:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:06.852 22:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:07.110 22:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:07.110 22:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:07.110 22:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:07.110 22:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:07.368 22:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:07.368 22:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:34:07.368 22:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:07.368 22:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:07.625 22:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:07.625 22:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:07.625 22:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:07.625 22:17:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:07.882 22:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:07.882 22:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:07.882 22:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:07.882 22:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:08.140 22:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:08.140 22:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:34:08.140 22:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:08.397 22:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:08.655 22:17:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:34:09.588 22:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:34:09.588 22:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:09.588 22:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:09.588 22:17:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:09.846 22:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:09.846 22:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:09.846 22:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:09.846 22:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:10.104 22:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:10.104 22:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:10.104 22:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:10.104 22:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:10.362 22:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:10.362 22:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:10.362 22:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:10.362 22:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:10.619 22:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:10.619 22:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:34:10.620 22:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:10.620 22:17:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:10.877 22:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:10.877 22:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:10.877 22:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:10.877 22:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:11.135 22:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:11.135 22:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:34:11.392 22:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:34:11.392 22:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:34:11.649 22:17:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:11.907 22:17:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:34:12.841 22:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:34:12.841 22:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:12.841 22:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:12.841 22:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:13.099 22:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:13.099 22:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:13.099 22:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:13.099 22:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:13.357 22:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:13.357 22:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:13.357 22:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:13.357 22:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:13.615 22:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:13.615 22:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:13.615 22:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:13.615 22:17:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:13.873 22:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:13.873 22:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:13.873 22:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:13.873 22:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:14.131 22:17:33 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:14.131 22:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:14.131 22:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:14.131 22:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:14.387 22:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:14.387 22:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:34:14.387 22:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:14.644 22:17:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:14.902 22:17:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:34:15.836 22:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:34:15.836 22:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:34:15.836 22:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:15.836 22:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:16.095 22:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:16.095 22:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:16.095 22:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:16.095 22:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:16.353 22:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:16.353 22:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:16.353 22:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:16.353 22:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:16.612 22:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:16.612 22:17:35 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:16.612 22:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:16.612 22:17:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:16.870 22:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:16.870 22:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:16.870 22:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:16.870 22:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:17.128 22:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:17.128 22:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:17.128 22:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:17.128 22:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:17.386 22:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:17.386 22:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:17.386 22:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:17.644 22:17:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:34:17.902 22:17:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:34:18.865 22:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:18.865 22:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:18.865 22:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:18.865 22:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:19.123 22:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:19.123 22:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:19.123 22:17:38 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:19.123 22:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:19.381 22:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:19.381 22:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:19.381 22:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:19.381 22:17:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:19.638 22:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:19.639 22:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:19.639 22:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:19.639 22:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:19.896 22:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:19.896 22:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:19.896 22:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:19.896 22:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:20.154 22:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:20.154 22:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:20.154 22:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:20.154 22:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:20.412 22:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:20.412 22:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:20.412 22:17:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:20.670 22:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:20.928 22:17:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:22.302 22:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:22.302 22:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:22.302 22:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:22.302 22:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:22.302 22:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:22.302 22:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:22.302 22:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:22.302 22:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:22.561 22:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:22.561 22:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:22.561 22:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:22.561 22:17:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:22.819 22:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:22.819 22:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:22.819 22:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:22.819 22:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:23.077 22:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:23.077 22:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:23.077 22:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.077 22:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:23.336 22:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:23.336 22:17:42 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:23.336 22:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:23.336 22:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:23.594 22:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:23.594 22:17:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 17073 00:34:23.594 22:17:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 17073 ']' 00:34:23.594 22:17:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 17073 00:34:23.594 22:17:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:34:23.594 22:17:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:23.594 22:17:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 17073 00:34:23.594 22:17:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:34:23.594 22:17:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:34:23.594 22:17:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 17073' 00:34:23.594 killing process with pid 17073 00:34:23.594 22:17:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 17073 00:34:23.594 22:17:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 17073 00:34:24.160 Connection closed with partial response: 00:34:24.160 00:34:24.160 00:34:24.421 22:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 17073 00:34:24.421 22:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:24.421 [2024-07-13 22:17:08.415036] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:34:24.421 [2024-07-13 22:17:08.415198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid17073 ] 00:34:24.421 EAL: No free 2048 kB hugepages reported on node 1 00:34:24.421 [2024-07-13 22:17:08.540065] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:24.421 [2024-07-13 22:17:08.769555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:24.421 Running I/O for 90 seconds... 
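What the 90-second run above exercises is a single namespace reached over two TCP paths: bdevperf attaches the same subsystem twice under one controller name, with -x multipath on the second attach so port 4421 becomes an alternate path of the same Nvme0n1 bdev rather than a new device, and partway through the run the selection policy is flipped to active_active. The commands below are lifted from the trace; only the $rpc_py and $B shorthands are added for readability:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    B=/var/tmp/bdevperf.sock

    # first path: creates controller Nvme0 and bdev Nvme0n1 via port 4420
    $rpc_py -s $B bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    # second path: same -b Nvme0 plus -x multipath adds 4421 as another I/O path
    $rpc_py -s $B bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    # mid-run (multipath_status.sh@116): use all optimized paths simultaneously
    $rpc_py -s $B bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active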
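Each check round then follows one pattern: set_ANA_state rewrites the ANA state of the two listeners through the target's RPC socket, a one-second sleep gives the host time to pick up the ANA log page change, and check_status compares the host-side view from bdev_nvme_get_io_paths against the expected current/connected/accessible flags for both ports. A hedged reconstruction of those helpers from the commands in the trace (multipath_status.sh lines 59-73; the argument handling is inferred, so treat this as a sketch rather than the script verbatim):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    set_ANA_state() {  # $1 = state for port 4420, $2 = state for port 4421
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    port_status() {    # $1 = port, $2 = io_path attribute, $3 = expected value
        [[ $($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
             jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2") \
           == "$3" ]]
    }

    check_status() {   # expected current/connected/accessible for 4420, then 4421
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
            port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
            port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

    # e.g. the non_optimized/inaccessible round expects 4421 unusable but still connected:
    #   set_ANA_state non_optimized inaccessible; sleep 1
    #   check_status true false true true true false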
00:34:24.421 [2024-07-13 22:17:24.650908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:52640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:24.421 [2024-07-13 22:17:24.650997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:34:24.421-00:34:24.427 [... several hundred further nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs in the same pattern, condensed: every outstanding READ and WRITE on qid:1 (cid 0-126, len:8) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 -- first burst for lba 52648-53656 at 22:17:24.650-.660, second burst for lba 70896-71992 at 22:17:40.275-.283 ...]
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:24.426 [2024-07-13 22:17:40.282740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:71488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:24.426 [2024-07-13 22:17:40.282765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:24.426 [2024-07-13 22:17:40.282801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:24.427 [2024-07-13 22:17:40.282827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:24.427 Received shutdown signal, test time was about 32.341469 seconds 00:34:24.427 00:34:24.427 Latency(us) 00:34:24.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:24.427 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:24.427 Verification LBA range: start 0x0 length 0x4000 00:34:24.427 Nvme0n1 : 32.34 5830.43 22.78 0.00 0.00 21917.37 515.79 4026531.84 00:34:24.427 =================================================================================================================== 00:34:24.427 Total : 5830.43 22.78 0.00 0.00 21917.37 515.79 4026531.84 00:34:24.427 22:17:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:24.684 22:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:34:24.684 22:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:24.684 22:17:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:34:24.684 22:17:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:24.684 22:17:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:34:24.684 22:17:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:24.684 22:17:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:34:24.684 22:17:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:24.684 22:17:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:24.684 rmmod nvme_tcp 00:34:24.684 rmmod nvme_fabrics 00:34:24.942 rmmod nvme_keyring 00:34:24.942 22:17:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:24.942 22:17:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:34:24.942 22:17:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:34:24.942 22:17:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 16749 ']' 00:34:24.942 22:17:44 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 16749 00:34:24.942 22:17:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 16749 ']' 00:34:24.942 22:17:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 16749 00:34:24.942 22:17:44 
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:34:24.942 22:17:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:24.942 22:17:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 16749 00:34:24.942 22:17:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:24.942 22:17:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:24.942 22:17:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 16749' 00:34:24.942 killing process with pid 16749 00:34:24.942 22:17:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 16749 00:34:24.942 22:17:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 16749 00:34:26.316 22:17:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:26.316 22:17:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:26.316 22:17:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:26.316 22:17:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:26.316 22:17:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:26.316 22:17:45 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:26.316 22:17:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:26.316 22:17:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:28.845 22:17:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:28.845 00:34:28.845 real 0m44.158s 00:34:28.845 user 2m11.155s 00:34:28.845 sys 0m10.097s 00:34:28.845 22:17:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:28.845 22:17:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:28.845 ************************************ 00:34:28.845 END TEST nvmf_host_multipath_status 00:34:28.845 ************************************ 00:34:28.845 22:17:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:28.845 22:17:47 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:28.845 22:17:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:28.845 22:17:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:28.845 22:17:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:28.845 ************************************ 00:34:28.845 START TEST nvmf_discovery_remove_ifc 00:34:28.845 ************************************ 00:34:28.845 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:28.845 * Looking for test storage... 
00:34:28.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:28.845 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:28.845 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:34:28.845 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:28.845 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:28.845 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:28.845 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:28.845 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:28.845 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:28.845 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:28.845 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:28.845 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:28.845 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:28.845 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:34:28.846 22:17:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:30.746 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
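The preamble above fixes the identities the rest of the test keeps referring to: a discovery service on port 8009, controller NQNs built from nqn.2016-06.io.spdk:cnode, the host NQN nqn.2021-12.io.spdk:test, and a second RPC socket at /tmp/host.sock for the host-side app. A minimal standalone sketch of the same identity setup (the values are from this trace; deriving the host ID with parameter expansion is an assumption, common.sh may do it differently):

  # requires nvme-cli for gen-hostnqn
  NVME_HOSTNQN=$(nvme gen-hostnqn)         # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}      # bare UUID, matching NVME_HOSTID above
  discovery_port=8009
  nqn=nqn.2016-06.io.spdk:cnode            # cnode0, cnode1, ... are derived from this
  host_nqn=nqn.2021-12.io.spdk:test
  host_sock=/tmp/host.sock                 # RPC socket of the initiator-side nvmf_tgt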
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:30.747 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:30.747 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:30.747 22:17:49 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:30.747 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:30.747 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:30.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:30.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:34:30.747 00:34:30.747 --- 10.0.0.2 ping statistics --- 00:34:30.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:30.747 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:30.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:30.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:34:30.747 00:34:30.747 --- 10.0.0.1 ping statistics --- 00:34:30.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:30.747 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=23407 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 23407 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 23407 ']' 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:30.747 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:30.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:30.748 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:30.748 22:17:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:30.748 [2024-07-13 22:17:49.873366] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
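Stripped of the xtrace noise, the topology nvmftestinit just built and verified with those two pings comes down to the commands below (cvl_0_0/cvl_0_1 are the two e810 ports found above; root privileges and a disposable test machine assumed):

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1   # start clean
  ip netns add cvl_0_0_ns_spdk                         # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # first port becomes the target NIC
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # second port stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back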
00:34:30.748 [2024-07-13 22:17:49.873497] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:30.748 EAL: No free 2048 kB hugepages reported on node 1 00:34:30.748 [2024-07-13 22:17:50.016353] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:31.005 [2024-07-13 22:17:50.275731] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:31.005 [2024-07-13 22:17:50.275802] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:31.005 [2024-07-13 22:17:50.275831] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:31.005 [2024-07-13 22:17:50.275856] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:31.005 [2024-07-13 22:17:50.275890] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:31.006 [2024-07-13 22:17:50.275946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:31.572 22:17:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:31.572 22:17:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:34:31.572 22:17:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:31.572 22:17:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:31.572 22:17:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:31.572 22:17:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:31.572 22:17:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:31.572 22:17:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:31.572 22:17:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:31.572 [2024-07-13 22:17:50.912428] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:31.572 [2024-07-13 22:17:50.920576] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:31.572 null0 00:34:31.572 [2024-07-13 22:17:50.952512] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:31.832 22:17:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:31.832 22:17:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=23584 00:34:31.832 22:17:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:34:31.832 22:17:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 23584 /tmp/host.sock 00:34:31.832 22:17:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 23584 ']' 00:34:31.832 22:17:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:34:31.832 22:17:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:34:31.832 22:17:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:31.832 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:31.832 22:17:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:31.832 22:17:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:31.832 [2024-07-13 22:17:51.070469] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:34:31.832 [2024-07-13 22:17:51.070613] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid23584 ] 00:34:31.832 EAL: No free 2048 kB hugepages reported on node 1 00:34:31.832 [2024-07-13 22:17:51.196160] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:32.090 [2024-07-13 22:17:51.424422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:32.655 22:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:32.655 22:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:34:32.655 22:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:32.655 22:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:32.655 22:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.655 22:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:32.655 22:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:32.655 22:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:32.655 22:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:32.655 22:17:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:33.247 22:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:33.247 22:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:33.247 22:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:33.247 22:17:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:34.185 [2024-07-13 22:17:53.365111] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:34.185 [2024-07-13 22:17:53.365174] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:34.185 [2024-07-13 22:17:53.365233] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:34.185 [2024-07-13 22:17:53.491687] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:34.185 [2024-07-13 22:17:53.556392] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:34.185 [2024-07-13 22:17:53.556485] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:34.185 [2024-07-13 22:17:53.556581] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:34.185 [2024-07-13 22:17:53.556628] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:34.185 [2024-07-13 22:17:53.556690] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:34.185 22:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.185 22:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:34.185 22:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:34.185 22:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:34.185 22:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:34.185 22:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.185 22:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:34.185 22:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:34.185 22:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:34.185 [2024-07-13 22:17:53.563157] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150001f2780 was disconnected and freed. delete nvme_qpair. 
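The @29/@33 helpers that drive the polling below reduce to roughly the following (reconstructed from the traced pipeline; the real helpers go through the suite's rpc_cmd wrapper and presumably bound the wait, which this sketch does not):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  get_bdev_list() {
      # every bdev name known to the host app, sorted, space-separated
      "$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      # poll once per second until the list is exactly "$1" ("" = no bdevs left)
      while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
  }
  wait_for_bdev nvme0n1   # satisfied at once here: discovery already attached nvme0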
00:34:34.185 22:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.443 22:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:34.443 22:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:34.443 22:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:34.443 22:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:34.443 22:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:34.443 22:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:34.443 22:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.443 22:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:34.443 22:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:34.443 22:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:34.443 22:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:34.443 22:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.443 22:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:34.443 22:17:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:35.375 22:17:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:35.375 22:17:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:35.375 22:17:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:35.375 22:17:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:35.375 22:17:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:35.375 22:17:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:35.375 22:17:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:35.375 22:17:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:35.375 22:17:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:35.375 22:17:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:36.748 22:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:36.748 22:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:36.748 22:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:36.748 22:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:36.748 22:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:36.748 22:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:34:36.748 22:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:36.748 22:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:36.748 22:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:36.748 22:17:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:37.682 22:17:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:37.682 22:17:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:37.682 22:17:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:37.682 22:17:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:37.682 22:17:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:37.682 22:17:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:37.682 22:17:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:37.682 22:17:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:37.682 22:17:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:37.682 22:17:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:38.616 22:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:38.616 22:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:38.616 22:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.616 22:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:38.616 22:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:38.616 22:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:38.616 22:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:38.616 22:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.616 22:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:38.616 22:17:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:39.550 22:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:39.550 22:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:39.551 22:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:39.551 22:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:39.551 22:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:39.551 22:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:39.551 22:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:39.551 22:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
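The one-per-second polls above are wait_for_bdev '' spinning: nvme0n1 stays in the list because the controller is merely disconnected, not yet declared lost. The outage being waited out was injected at @75/@76 further up with nothing more than (same namespace and device names as this run; destructive by design):

  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

While the reset loop runs, the controller's state can also be inspected out-of-band with the standard bdev_nvme_get_controllers RPC against the same /tmp/host.sock socket.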
00:34:39.551 22:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:39.551 22:17:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:39.809 [2024-07-13 22:17:58.997843] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:39.809 [2024-07-13 22:17:58.997972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:39.809 [2024-07-13 22:17:58.998016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.809 [2024-07-13 22:17:58.998058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:39.809 [2024-07-13 22:17:58.998091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.809 [2024-07-13 22:17:58.998138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:39.809 [2024-07-13 22:17:58.998172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.809 [2024-07-13 22:17:58.998226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:39.809 [2024-07-13 22:17:58.998266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.809 [2024-07-13 22:17:58.998307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:39.809 [2024-07-13 22:17:58.998345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:39.809 [2024-07-13 22:17:58.998384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:34:39.809 [2024-07-13 22:17:59.007864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:34:39.809 [2024-07-13 22:17:59.017949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:40.743 22:17:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:40.743 22:17:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:40.743 22:17:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:40.743 22:17:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:40.743 22:17:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:40.743 22:17:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:40.743 22:17:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:40.743 [2024-07-13 22:18:00.029934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:40.743 
[2024-07-13 22:18:00.030051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:34:40.743 [2024-07-13 22:18:00.030116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:34:40.743 [2024-07-13 22:18:00.030237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:34:40.743 [2024-07-13 22:18:00.031123] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:34:40.743 [2024-07-13 22:18:00.031190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:40.743 [2024-07-13 22:18:00.031248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:40.743 [2024-07-13 22:18:00.031287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:40.743 [2024-07-13 22:18:00.031359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:40.743 [2024-07-13 22:18:00.031396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:34:40.743 22:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:40.743 22:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:40.743 22:18:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:41.675 [2024-07-13 22:18:01.033971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:34:41.676 [2024-07-13 22:18:01.034030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:34:41.676 [2024-07-13 22:18:01.034063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:34:41.676 [2024-07-13 22:18:01.034094] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:34:41.676 [2024-07-13 22:18:01.034184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
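The collapse above is paced by the flags passed to bdev_nvme_start_discovery earlier in the trace (--reconnect-delay-sec 1, --fast-io-fail-timeout-sec 1, --ctrlr-loss-timeout-sec 2). Read against those timers, the give-up sequence lands roughly on schedule:

  22:17:58.997  recv times out (errno 110); the qpair is torn down and a reset is queued
  22:18:00.029  reconnect attempt after the 1 s delay; connect() fails with errno 110 again
  22:18:01.033  ~2 s after the first failure the ctrlr-loss timeout expires; the controller
                is left in failed state and its discovery entry is dropped (records below)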
00:34:41.676 [2024-07-13 22:18:01.034272] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:41.676 [2024-07-13 22:18:01.034369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:41.676 [2024-07-13 22:18:01.034417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.676 [2024-07-13 22:18:01.034466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:41.676 [2024-07-13 22:18:01.034504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.676 [2024-07-13 22:18:01.034543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:41.676 [2024-07-13 22:18:01.034584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.676 [2024-07-13 22:18:01.034626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:41.676 [2024-07-13 22:18:01.034668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.676 [2024-07-13 22:18:01.034710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:41.676 [2024-07-13 22:18:01.034751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:41.676 [2024-07-13 22:18:01.034791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
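Next the discovery controller itself is shut down, and then recovery is the mirror image of the fault injection: the trace below re-adds the address and brings the link back up, after which the still-running discovery poller (pointed at 10.0.0.2:8009) re-attaches the subsystem as a brand-new controller, nvme1, hence the subsequent wait for nvme1n1:

  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up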
00:34:41.676 [2024-07-13 22:18:01.034947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:34:41.676 [2024-07-13 22:18:01.035920] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:41.676 [2024-07-13 22:18:01.035955] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:34:41.676 22:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:41.676 22:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:41.676 22:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:41.676 22:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.676 22:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:41.676 22:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:41.676 22:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:41.676 22:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.933 22:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:41.933 22:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:41.933 22:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:41.933 22:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:41.933 22:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:41.933 22:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:41.933 22:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:41.933 22:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.933 22:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:41.933 22:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:41.933 22:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:41.934 22:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.934 22:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:41.934 22:18:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:42.866 22:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:42.866 22:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:42.866 22:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:42.866 22:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:42.866 22:18:02 nvmf_tcp.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@10 -- # set +x 00:34:42.866 22:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:42.866 22:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:42.866 22:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:42.866 22:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:42.866 22:18:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:43.800 [2024-07-13 22:18:03.051640] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:43.800 [2024-07-13 22:18:03.051697] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:43.800 [2024-07-13 22:18:03.051739] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:43.800 [2024-07-13 22:18:03.141081] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:34:44.063 22:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:44.063 22:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:44.063 22:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.063 22:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:44.063 22:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:44.063 22:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:44.063 22:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:44.063 22:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.063 [2024-07-13 22:18:03.243771] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:44.063 [2024-07-13 22:18:03.243854] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:44.063 [2024-07-13 22:18:03.243970] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:44.063 [2024-07-13 22:18:03.244008] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:34:44.063 [2024-07-13 22:18:03.244032] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:44.063 [2024-07-13 22:18:03.249949] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150001f2f00 was disconnected and freed. delete nvme_qpair. 
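The get_bdev_list polling traced above is the script's wait loop: it queries the SPDK app over its RPC socket once per second until the expected bdev name shows up. A minimal standalone sketch of that pattern, assuming the stock scripts/rpc.py client in place of the log's rpc_cmd wrapper (the socket path, jq filter, and sort|xargs normalization are taken verbatim from the trace):

    wait_for_bdev() {
        # Poll bdev_get_bdevs until the reported bdev list equals the expected name.
        local expected=$1
        while true; do
            names=$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
            [[ "$names" == "$expected" ]] && break
            sleep 1
        done
    }

    wait_for_bdev nvme1n1   # returns once discovery has re-attached the subsystem

The helper name matches the wait_for_bdev call visible at host/discovery_remove_ifc.sh@86 above; only the rpc.py invocation is an assumption.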
00:34:44.063 22:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:44.063 22:18:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:44.997 22:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:44.997 22:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:44.997 22:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:44.997 22:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:44.997 22:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:44.997 22:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:44.997 22:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:44.997 22:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:44.997 22:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:34:44.997 22:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:34:44.997 22:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 23584 00:34:44.997 22:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 23584 ']' 00:34:44.997 22:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 23584 00:34:44.997 22:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:34:44.997 22:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:44.997 22:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 23584 00:34:44.997 22:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:44.997 22:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:44.997 22:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 23584' 00:34:44.997 killing process with pid 23584 00:34:44.997 22:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 23584 00:34:44.997 22:18:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 23584 00:34:46.372 22:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:34:46.372 22:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:46.372 22:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:34:46.372 22:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:46.372 22:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:34:46.372 22:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:46.372 22:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:46.372 rmmod nvme_tcp 00:34:46.372 rmmod nvme_fabrics 00:34:46.372 rmmod nvme_keyring 00:34:46.372 22:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # 
modprobe -v -r nvme-fabrics 00:34:46.372 22:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:34:46.372 22:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:34:46.372 22:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 23407 ']' 00:34:46.372 22:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 23407 00:34:46.372 22:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 23407 ']' 00:34:46.372 22:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 23407 00:34:46.372 22:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:34:46.372 22:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:46.372 22:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 23407 00:34:46.372 22:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:34:46.372 22:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:34:46.372 22:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 23407' 00:34:46.372 killing process with pid 23407 00:34:46.372 22:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 23407 00:34:46.372 22:18:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 23407 00:34:47.746 22:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:47.746 22:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:47.746 22:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:47.746 22:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:47.746 22:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:47.746 22:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:47.746 22:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:47.746 22:18:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:49.678 22:18:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:49.678 00:34:49.678 real 0m21.158s 00:34:49.678 user 0m31.172s 00:34:49.678 sys 0m3.244s 00:34:49.678 22:18:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:49.678 22:18:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:49.678 ************************************ 00:34:49.678 END TEST nvmf_discovery_remove_ifc 00:34:49.678 ************************************ 00:34:49.678 22:18:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:49.678 22:18:08 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:49.678 22:18:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:49.678 22:18:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:49.678 22:18:08 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:34:49.678 ************************************ 00:34:49.678 START TEST nvmf_identify_kernel_target 00:34:49.678 ************************************ 00:34:49.678 22:18:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:49.678 * Looking for test storage... 00:34:49.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:49.678 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:49.678 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:34:49.678 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:49.678 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:49.678 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:49.678 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:49.678 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:49.678 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:49.678 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:49.678 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:34:49.679 22:18:09 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:34:49.679 22:18:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:51.580 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:51.580 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:34:51.580 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:51.580 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:51.580 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:51.580 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:51.580 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:51.580 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:34:51.580 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:51.580 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:34:51.580 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:34:51.580 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:34:51.580 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:34:51.580 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:34:51.580 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:34:51.580 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:51.580 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:51.580 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:51.580 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:51.581 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:51.581 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:51.581 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:51.581 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:51.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:51.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:34:51.581 00:34:51.581 --- 10.0.0.2 ping statistics --- 00:34:51.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:51.581 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:51.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:51.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:34:51.581 00:34:51.581 --- 10.0.0.1 ping statistics --- 00:34:51.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:51.581 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:51.581 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:51.840 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:34:51.841 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:34:51.841 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:34:51.841 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:34:51.841 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:34:51.841 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.841 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.841 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:34:51.841 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.841 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:34:51.841 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:34:51.841 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:34:51.841 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:34:51.841 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:51.841 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:51.841 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:34:51.841 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:51.841 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:51.841 22:18:10 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:51.841 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:34:51.841 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:34:51.841 22:18:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:34:51.841 22:18:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:51.841 22:18:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:52.780 Waiting for block devices as requested 00:34:52.780 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:52.780 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:53.039 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:53.039 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:53.039 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:53.297 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:53.297 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:53.297 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:53.297 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:53.557 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:53.557 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:53.557 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:53.557 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:53.816 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:53.816 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:53.816 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:54.074 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:54.074 No valid GPT data, bailing 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:54.074 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:54.332 00:34:54.332 Discovery Log Number of Records 2, Generation counter 2 00:34:54.332 =====Discovery Log Entry 0====== 00:34:54.332 trtype: tcp 00:34:54.332 adrfam: ipv4 00:34:54.332 subtype: current discovery subsystem 00:34:54.332 treq: not specified, sq flow control disable supported 00:34:54.332 portid: 1 00:34:54.332 trsvcid: 4420 00:34:54.332 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:54.332 traddr: 10.0.0.1 00:34:54.332 eflags: none 00:34:54.332 sectype: none 00:34:54.332 =====Discovery Log Entry 1====== 00:34:54.332 trtype: tcp 00:34:54.332 adrfam: ipv4 00:34:54.332 subtype: nvme subsystem 00:34:54.332 treq: not specified, sq flow control disable supported 00:34:54.332 portid: 1 00:34:54.332 trsvcid: 4420 00:34:54.332 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:54.332 traddr: 10.0.0.1 00:34:54.332 eflags: none 00:34:54.332 sectype: none 00:34:54.332 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:34:54.332 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:54.332 EAL: No free 2048 kB hugepages reported on node 1 00:34:54.332 ===================================================== 00:34:54.332 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:54.332 ===================================================== 00:34:54.332 Controller Capabilities/Features 00:34:54.332 ================================ 00:34:54.332 Vendor ID: 0000 00:34:54.332 Subsystem Vendor ID: 0000 00:34:54.332 Serial Number: ade961edde22a3c506e2 00:34:54.332 Model Number: Linux 00:34:54.332 Firmware Version: 6.7.0-68 00:34:54.332 Recommended Arb Burst: 0 00:34:54.332 IEEE OUI Identifier: 00 00 00 00:34:54.332 Multi-path I/O 00:34:54.332 May have multiple subsystem ports: No 00:34:54.332 May have multiple 
controllers: No 00:34:54.332 Associated with SR-IOV VF: No 00:34:54.332 Max Data Transfer Size: Unlimited 00:34:54.332 Max Number of Namespaces: 0 00:34:54.332 Max Number of I/O Queues: 1024 00:34:54.332 NVMe Specification Version (VS): 1.3 00:34:54.332 NVMe Specification Version (Identify): 1.3 00:34:54.332 Maximum Queue Entries: 1024 00:34:54.332 Contiguous Queues Required: No 00:34:54.332 Arbitration Mechanisms Supported 00:34:54.332 Weighted Round Robin: Not Supported 00:34:54.332 Vendor Specific: Not Supported 00:34:54.333 Reset Timeout: 7500 ms 00:34:54.333 Doorbell Stride: 4 bytes 00:34:54.333 NVM Subsystem Reset: Not Supported 00:34:54.333 Command Sets Supported 00:34:54.333 NVM Command Set: Supported 00:34:54.333 Boot Partition: Not Supported 00:34:54.333 Memory Page Size Minimum: 4096 bytes 00:34:54.333 Memory Page Size Maximum: 4096 bytes 00:34:54.333 Persistent Memory Region: Not Supported 00:34:54.333 Optional Asynchronous Events Supported 00:34:54.333 Namespace Attribute Notices: Not Supported 00:34:54.333 Firmware Activation Notices: Not Supported 00:34:54.333 ANA Change Notices: Not Supported 00:34:54.333 PLE Aggregate Log Change Notices: Not Supported 00:34:54.333 LBA Status Info Alert Notices: Not Supported 00:34:54.333 EGE Aggregate Log Change Notices: Not Supported 00:34:54.333 Normal NVM Subsystem Shutdown event: Not Supported 00:34:54.333 Zone Descriptor Change Notices: Not Supported 00:34:54.333 Discovery Log Change Notices: Supported 00:34:54.333 Controller Attributes 00:34:54.333 128-bit Host Identifier: Not Supported 00:34:54.333 Non-Operational Permissive Mode: Not Supported 00:34:54.333 NVM Sets: Not Supported 00:34:54.333 Read Recovery Levels: Not Supported 00:34:54.333 Endurance Groups: Not Supported 00:34:54.333 Predictable Latency Mode: Not Supported 00:34:54.333 Traffic Based Keep ALive: Not Supported 00:34:54.333 Namespace Granularity: Not Supported 00:34:54.333 SQ Associations: Not Supported 00:34:54.333 UUID List: Not Supported 00:34:54.333 Multi-Domain Subsystem: Not Supported 00:34:54.333 Fixed Capacity Management: Not Supported 00:34:54.333 Variable Capacity Management: Not Supported 00:34:54.333 Delete Endurance Group: Not Supported 00:34:54.333 Delete NVM Set: Not Supported 00:34:54.333 Extended LBA Formats Supported: Not Supported 00:34:54.333 Flexible Data Placement Supported: Not Supported 00:34:54.333 00:34:54.333 Controller Memory Buffer Support 00:34:54.333 ================================ 00:34:54.333 Supported: No 00:34:54.333 00:34:54.333 Persistent Memory Region Support 00:34:54.333 ================================ 00:34:54.333 Supported: No 00:34:54.333 00:34:54.333 Admin Command Set Attributes 00:34:54.333 ============================ 00:34:54.333 Security Send/Receive: Not Supported 00:34:54.333 Format NVM: Not Supported 00:34:54.333 Firmware Activate/Download: Not Supported 00:34:54.333 Namespace Management: Not Supported 00:34:54.333 Device Self-Test: Not Supported 00:34:54.333 Directives: Not Supported 00:34:54.333 NVMe-MI: Not Supported 00:34:54.333 Virtualization Management: Not Supported 00:34:54.333 Doorbell Buffer Config: Not Supported 00:34:54.333 Get LBA Status Capability: Not Supported 00:34:54.333 Command & Feature Lockdown Capability: Not Supported 00:34:54.333 Abort Command Limit: 1 00:34:54.333 Async Event Request Limit: 1 00:34:54.333 Number of Firmware Slots: N/A 00:34:54.333 Firmware Slot 1 Read-Only: N/A 00:34:54.333 Firmware Activation Without Reset: N/A 00:34:54.333 Multiple Update Detection Support: N/A 
00:34:54.333 Firmware Update Granularity: No Information Provided 00:34:54.333 Per-Namespace SMART Log: No 00:34:54.333 Asymmetric Namespace Access Log Page: Not Supported 00:34:54.333 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:54.333 Command Effects Log Page: Not Supported 00:34:54.333 Get Log Page Extended Data: Supported 00:34:54.333 Telemetry Log Pages: Not Supported 00:34:54.333 Persistent Event Log Pages: Not Supported 00:34:54.333 Supported Log Pages Log Page: May Support 00:34:54.333 Commands Supported & Effects Log Page: Not Supported 00:34:54.333 Feature Identifiers & Effects Log Page:May Support 00:34:54.333 NVMe-MI Commands & Effects Log Page: May Support 00:34:54.333 Data Area 4 for Telemetry Log: Not Supported 00:34:54.333 Error Log Page Entries Supported: 1 00:34:54.333 Keep Alive: Not Supported 00:34:54.333 00:34:54.333 NVM Command Set Attributes 00:34:54.333 ========================== 00:34:54.333 Submission Queue Entry Size 00:34:54.333 Max: 1 00:34:54.333 Min: 1 00:34:54.333 Completion Queue Entry Size 00:34:54.333 Max: 1 00:34:54.333 Min: 1 00:34:54.333 Number of Namespaces: 0 00:34:54.333 Compare Command: Not Supported 00:34:54.333 Write Uncorrectable Command: Not Supported 00:34:54.333 Dataset Management Command: Not Supported 00:34:54.333 Write Zeroes Command: Not Supported 00:34:54.333 Set Features Save Field: Not Supported 00:34:54.333 Reservations: Not Supported 00:34:54.333 Timestamp: Not Supported 00:34:54.333 Copy: Not Supported 00:34:54.333 Volatile Write Cache: Not Present 00:34:54.333 Atomic Write Unit (Normal): 1 00:34:54.333 Atomic Write Unit (PFail): 1 00:34:54.333 Atomic Compare & Write Unit: 1 00:34:54.333 Fused Compare & Write: Not Supported 00:34:54.333 Scatter-Gather List 00:34:54.333 SGL Command Set: Supported 00:34:54.333 SGL Keyed: Not Supported 00:34:54.333 SGL Bit Bucket Descriptor: Not Supported 00:34:54.333 SGL Metadata Pointer: Not Supported 00:34:54.333 Oversized SGL: Not Supported 00:34:54.333 SGL Metadata Address: Not Supported 00:34:54.333 SGL Offset: Supported 00:34:54.333 Transport SGL Data Block: Not Supported 00:34:54.333 Replay Protected Memory Block: Not Supported 00:34:54.333 00:34:54.333 Firmware Slot Information 00:34:54.333 ========================= 00:34:54.333 Active slot: 0 00:34:54.333 00:34:54.333 00:34:54.333 Error Log 00:34:54.333 ========= 00:34:54.333 00:34:54.333 Active Namespaces 00:34:54.333 ================= 00:34:54.333 Discovery Log Page 00:34:54.333 ================== 00:34:54.333 Generation Counter: 2 00:34:54.333 Number of Records: 2 00:34:54.333 Record Format: 0 00:34:54.333 00:34:54.333 Discovery Log Entry 0 00:34:54.333 ---------------------- 00:34:54.333 Transport Type: 3 (TCP) 00:34:54.333 Address Family: 1 (IPv4) 00:34:54.333 Subsystem Type: 3 (Current Discovery Subsystem) 00:34:54.333 Entry Flags: 00:34:54.333 Duplicate Returned Information: 0 00:34:54.333 Explicit Persistent Connection Support for Discovery: 0 00:34:54.333 Transport Requirements: 00:34:54.333 Secure Channel: Not Specified 00:34:54.333 Port ID: 1 (0x0001) 00:34:54.333 Controller ID: 65535 (0xffff) 00:34:54.333 Admin Max SQ Size: 32 00:34:54.333 Transport Service Identifier: 4420 00:34:54.333 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:54.333 Transport Address: 10.0.0.1 00:34:54.333 Discovery Log Entry 1 00:34:54.333 ---------------------- 00:34:54.333 Transport Type: 3 (TCP) 00:34:54.333 Address Family: 1 (IPv4) 00:34:54.333 Subsystem Type: 2 (NVM Subsystem) 00:34:54.333 Entry Flags: 
00:34:54.333 Duplicate Returned Information: 0 00:34:54.333 Explicit Persistent Connection Support for Discovery: 0 00:34:54.333 Transport Requirements: 00:34:54.333 Secure Channel: Not Specified 00:34:54.333 Port ID: 1 (0x0001) 00:34:54.333 Controller ID: 65535 (0xffff) 00:34:54.333 Admin Max SQ Size: 32 00:34:54.333 Transport Service Identifier: 4420 00:34:54.333 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:54.333 Transport Address: 10.0.0.1 00:34:54.333 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:54.592 EAL: No free 2048 kB hugepages reported on node 1 00:34:54.592 get_feature(0x01) failed 00:34:54.592 get_feature(0x02) failed 00:34:54.592 get_feature(0x04) failed 00:34:54.592 ===================================================== 00:34:54.592 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:54.592 ===================================================== 00:34:54.592 Controller Capabilities/Features 00:34:54.592 ================================ 00:34:54.592 Vendor ID: 0000 00:34:54.592 Subsystem Vendor ID: 0000 00:34:54.592 Serial Number: 5e297e5b6cb28dcc2e17 00:34:54.592 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:54.592 Firmware Version: 6.7.0-68 00:34:54.592 Recommended Arb Burst: 6 00:34:54.592 IEEE OUI Identifier: 00 00 00 00:34:54.592 Multi-path I/O 00:34:54.592 May have multiple subsystem ports: Yes 00:34:54.592 May have multiple controllers: Yes 00:34:54.592 Associated with SR-IOV VF: No 00:34:54.592 Max Data Transfer Size: Unlimited 00:34:54.592 Max Number of Namespaces: 1024 00:34:54.592 Max Number of I/O Queues: 128 00:34:54.592 NVMe Specification Version (VS): 1.3 00:34:54.592 NVMe Specification Version (Identify): 1.3 00:34:54.592 Maximum Queue Entries: 1024 00:34:54.592 Contiguous Queues Required: No 00:34:54.592 Arbitration Mechanisms Supported 00:34:54.592 Weighted Round Robin: Not Supported 00:34:54.592 Vendor Specific: Not Supported 00:34:54.592 Reset Timeout: 7500 ms 00:34:54.592 Doorbell Stride: 4 bytes 00:34:54.592 NVM Subsystem Reset: Not Supported 00:34:54.592 Command Sets Supported 00:34:54.592 NVM Command Set: Supported 00:34:54.592 Boot Partition: Not Supported 00:34:54.592 Memory Page Size Minimum: 4096 bytes 00:34:54.592 Memory Page Size Maximum: 4096 bytes 00:34:54.592 Persistent Memory Region: Not Supported 00:34:54.592 Optional Asynchronous Events Supported 00:34:54.592 Namespace Attribute Notices: Supported 00:34:54.592 Firmware Activation Notices: Not Supported 00:34:54.592 ANA Change Notices: Supported 00:34:54.592 PLE Aggregate Log Change Notices: Not Supported 00:34:54.592 LBA Status Info Alert Notices: Not Supported 00:34:54.592 EGE Aggregate Log Change Notices: Not Supported 00:34:54.592 Normal NVM Subsystem Shutdown event: Not Supported 00:34:54.592 Zone Descriptor Change Notices: Not Supported 00:34:54.592 Discovery Log Change Notices: Not Supported 00:34:54.592 Controller Attributes 00:34:54.592 128-bit Host Identifier: Supported 00:34:54.592 Non-Operational Permissive Mode: Not Supported 00:34:54.592 NVM Sets: Not Supported 00:34:54.592 Read Recovery Levels: Not Supported 00:34:54.592 Endurance Groups: Not Supported 00:34:54.592 Predictable Latency Mode: Not Supported 00:34:54.592 Traffic Based Keep ALive: Supported 00:34:54.592 Namespace Granularity: Not Supported 
00:34:54.592 SQ Associations: Not Supported 00:34:54.592 UUID List: Not Supported 00:34:54.593 Multi-Domain Subsystem: Not Supported 00:34:54.593 Fixed Capacity Management: Not Supported 00:34:54.593 Variable Capacity Management: Not Supported 00:34:54.593 Delete Endurance Group: Not Supported 00:34:54.593 Delete NVM Set: Not Supported 00:34:54.593 Extended LBA Formats Supported: Not Supported 00:34:54.593 Flexible Data Placement Supported: Not Supported 00:34:54.593 00:34:54.593 Controller Memory Buffer Support 00:34:54.593 ================================ 00:34:54.593 Supported: No 00:34:54.593 00:34:54.593 Persistent Memory Region Support 00:34:54.593 ================================ 00:34:54.593 Supported: No 00:34:54.593 00:34:54.593 Admin Command Set Attributes 00:34:54.593 ============================ 00:34:54.593 Security Send/Receive: Not Supported 00:34:54.593 Format NVM: Not Supported 00:34:54.593 Firmware Activate/Download: Not Supported 00:34:54.593 Namespace Management: Not Supported 00:34:54.593 Device Self-Test: Not Supported 00:34:54.593 Directives: Not Supported 00:34:54.593 NVMe-MI: Not Supported 00:34:54.593 Virtualization Management: Not Supported 00:34:54.593 Doorbell Buffer Config: Not Supported 00:34:54.593 Get LBA Status Capability: Not Supported 00:34:54.593 Command & Feature Lockdown Capability: Not Supported 00:34:54.593 Abort Command Limit: 4 00:34:54.593 Async Event Request Limit: 4 00:34:54.593 Number of Firmware Slots: N/A 00:34:54.593 Firmware Slot 1 Read-Only: N/A 00:34:54.593 Firmware Activation Without Reset: N/A 00:34:54.593 Multiple Update Detection Support: N/A 00:34:54.593 Firmware Update Granularity: No Information Provided 00:34:54.593 Per-Namespace SMART Log: Yes 00:34:54.593 Asymmetric Namespace Access Log Page: Supported 00:34:54.593 ANA Transition Time : 10 sec 00:34:54.593 00:34:54.593 Asymmetric Namespace Access Capabilities 00:34:54.593 ANA Optimized State : Supported 00:34:54.593 ANA Non-Optimized State : Supported 00:34:54.593 ANA Inaccessible State : Supported 00:34:54.593 ANA Persistent Loss State : Supported 00:34:54.593 ANA Change State : Supported 00:34:54.593 ANAGRPID is not changed : No 00:34:54.593 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:54.593 00:34:54.593 ANA Group Identifier Maximum : 128 00:34:54.593 Number of ANA Group Identifiers : 128 00:34:54.593 Max Number of Allowed Namespaces : 1024 00:34:54.593 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:54.593 Command Effects Log Page: Supported 00:34:54.593 Get Log Page Extended Data: Supported 00:34:54.593 Telemetry Log Pages: Not Supported 00:34:54.593 Persistent Event Log Pages: Not Supported 00:34:54.593 Supported Log Pages Log Page: May Support 00:34:54.593 Commands Supported & Effects Log Page: Not Supported 00:34:54.593 Feature Identifiers & Effects Log Page:May Support 00:34:54.593 NVMe-MI Commands & Effects Log Page: May Support 00:34:54.593 Data Area 4 for Telemetry Log: Not Supported 00:34:54.593 Error Log Page Entries Supported: 128 00:34:54.593 Keep Alive: Supported 00:34:54.593 Keep Alive Granularity: 1000 ms 00:34:54.593 00:34:54.593 NVM Command Set Attributes 00:34:54.593 ========================== 00:34:54.593 Submission Queue Entry Size 00:34:54.593 Max: 64 00:34:54.593 Min: 64 00:34:54.593 Completion Queue Entry Size 00:34:54.593 Max: 16 00:34:54.593 Min: 16 00:34:54.593 Number of Namespaces: 1024 00:34:54.593 Compare Command: Not Supported 00:34:54.593 Write Uncorrectable Command: Not Supported 00:34:54.593 Dataset Management Command: Supported 
00:34:54.593 Write Zeroes Command: Supported 00:34:54.593 Set Features Save Field: Not Supported 00:34:54.593 Reservations: Not Supported 00:34:54.593 Timestamp: Not Supported 00:34:54.593 Copy: Not Supported 00:34:54.593 Volatile Write Cache: Present 00:34:54.593 Atomic Write Unit (Normal): 1 00:34:54.593 Atomic Write Unit (PFail): 1 00:34:54.593 Atomic Compare & Write Unit: 1 00:34:54.593 Fused Compare & Write: Not Supported 00:34:54.593 Scatter-Gather List 00:34:54.593 SGL Command Set: Supported 00:34:54.593 SGL Keyed: Not Supported 00:34:54.593 SGL Bit Bucket Descriptor: Not Supported 00:34:54.593 SGL Metadata Pointer: Not Supported 00:34:54.593 Oversized SGL: Not Supported 00:34:54.593 SGL Metadata Address: Not Supported 00:34:54.593 SGL Offset: Supported 00:34:54.593 Transport SGL Data Block: Not Supported 00:34:54.593 Replay Protected Memory Block: Not Supported 00:34:54.593 00:34:54.593 Firmware Slot Information 00:34:54.593 ========================= 00:34:54.593 Active slot: 0 00:34:54.593 00:34:54.593 Asymmetric Namespace Access 00:34:54.593 =========================== 00:34:54.593 Change Count : 0 00:34:54.593 Number of ANA Group Descriptors : 1 00:34:54.593 ANA Group Descriptor : 0 00:34:54.593 ANA Group ID : 1 00:34:54.593 Number of NSID Values : 1 00:34:54.593 Change Count : 0 00:34:54.593 ANA State : 1 00:34:54.593 Namespace Identifier : 1 00:34:54.593 00:34:54.593 Commands Supported and Effects 00:34:54.593 ============================== 00:34:54.593 Admin Commands 00:34:54.593 -------------- 00:34:54.593 Get Log Page (02h): Supported 00:34:54.593 Identify (06h): Supported 00:34:54.593 Abort (08h): Supported 00:34:54.593 Set Features (09h): Supported 00:34:54.593 Get Features (0Ah): Supported 00:34:54.593 Asynchronous Event Request (0Ch): Supported 00:34:54.593 Keep Alive (18h): Supported 00:34:54.593 I/O Commands 00:34:54.593 ------------ 00:34:54.593 Flush (00h): Supported 00:34:54.593 Write (01h): Supported LBA-Change 00:34:54.593 Read (02h): Supported 00:34:54.593 Write Zeroes (08h): Supported LBA-Change 00:34:54.593 Dataset Management (09h): Supported 00:34:54.593 00:34:54.593 Error Log 00:34:54.593 ========= 00:34:54.593 Entry: 0 00:34:54.593 Error Count: 0x3 00:34:54.593 Submission Queue Id: 0x0 00:34:54.593 Command Id: 0x5 00:34:54.593 Phase Bit: 0 00:34:54.593 Status Code: 0x2 00:34:54.593 Status Code Type: 0x0 00:34:54.593 Do Not Retry: 1 00:34:54.593 Error Location: 0x28 00:34:54.593 LBA: 0x0 00:34:54.593 Namespace: 0x0 00:34:54.593 Vendor Log Page: 0x0 00:34:54.593 ----------- 00:34:54.593 Entry: 1 00:34:54.593 Error Count: 0x2 00:34:54.593 Submission Queue Id: 0x0 00:34:54.593 Command Id: 0x5 00:34:54.593 Phase Bit: 0 00:34:54.593 Status Code: 0x2 00:34:54.593 Status Code Type: 0x0 00:34:54.593 Do Not Retry: 1 00:34:54.593 Error Location: 0x28 00:34:54.593 LBA: 0x0 00:34:54.593 Namespace: 0x0 00:34:54.593 Vendor Log Page: 0x0 00:34:54.593 ----------- 00:34:54.593 Entry: 2 00:34:54.593 Error Count: 0x1 00:34:54.593 Submission Queue Id: 0x0 00:34:54.593 Command Id: 0x4 00:34:54.593 Phase Bit: 0 00:34:54.593 Status Code: 0x2 00:34:54.593 Status Code Type: 0x0 00:34:54.593 Do Not Retry: 1 00:34:54.593 Error Location: 0x28 00:34:54.593 LBA: 0x0 00:34:54.593 Namespace: 0x0 00:34:54.593 Vendor Log Page: 0x0 00:34:54.593 00:34:54.593 Number of Queues 00:34:54.593 ================ 00:34:54.593 Number of I/O Submission Queues: 128 00:34:54.593 Number of I/O Completion Queues: 128 00:34:54.593 00:34:54.593 ZNS Specific Controller Data 00:34:54.593 
============================ 00:34:54.593 Zone Append Size Limit: 0 00:34:54.593 00:34:54.593 00:34:54.593 Active Namespaces 00:34:54.593 ================= 00:34:54.593 get_feature(0x05) failed 00:34:54.593 Namespace ID:1 00:34:54.593 Command Set Identifier: NVM (00h) 00:34:54.593 Deallocate: Supported 00:34:54.593 Deallocated/Unwritten Error: Not Supported 00:34:54.593 Deallocated Read Value: Unknown 00:34:54.593 Deallocate in Write Zeroes: Not Supported 00:34:54.593 Deallocated Guard Field: 0xFFFF 00:34:54.593 Flush: Supported 00:34:54.593 Reservation: Not Supported 00:34:54.593 Namespace Sharing Capabilities: Multiple Controllers 00:34:54.593 Size (in LBAs): 1953525168 (931GiB) 00:34:54.593 Capacity (in LBAs): 1953525168 (931GiB) 00:34:54.593 Utilization (in LBAs): 1953525168 (931GiB) 00:34:54.593 UUID: d6c432c9-4b12-43fd-86c0-d9728bf4f6f0 00:34:54.593 Thin Provisioning: Not Supported 00:34:54.593 Per-NS Atomic Units: Yes 00:34:54.593 Atomic Boundary Size (Normal): 0 00:34:54.593 Atomic Boundary Size (PFail): 0 00:34:54.593 Atomic Boundary Offset: 0 00:34:54.593 NGUID/EUI64 Never Reused: No 00:34:54.593 ANA group ID: 1 00:34:54.593 Namespace Write Protected: No 00:34:54.593 Number of LBA Formats: 1 00:34:54.593 Current LBA Format: LBA Format #00 00:34:54.593 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:54.593 00:34:54.593 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:34:54.593 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:54.593 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:34:54.593 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:54.593 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:34:54.593 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:54.593 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:54.593 rmmod nvme_tcp 00:34:54.593 rmmod nvme_fabrics 00:34:54.594 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:54.594 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:34:54.594 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:34:54.594 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:34:54.594 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:54.594 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:54.594 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:54.594 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:54.594 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:54.594 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:54.594 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:54.594 22:18:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:57.125 22:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:57.125 
22:18:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:57.125 22:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:57.125 22:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:34:57.125 22:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:57.125 22:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:57.125 22:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:57.125 22:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:57.125 22:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:34:57.125 22:18:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:34:57.125 22:18:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:58.061 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:58.061 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:58.061 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:58.061 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:58.061 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:58.061 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:58.061 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:58.061 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:58.061 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:58.061 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:58.061 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:58.061 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:58.061 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:58.061 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:58.061 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:58.061 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:59.001 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:59.001 00:34:59.001 real 0m9.399s 00:34:59.001 user 0m2.007s 00:34:59.001 sys 0m3.368s 00:34:59.001 22:18:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:59.001 22:18:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:59.001 ************************************ 00:34:59.001 END TEST nvmf_identify_kernel_target 00:34:59.001 ************************************ 00:34:59.001 22:18:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:59.001 22:18:18 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:59.001 22:18:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:59.001 22:18:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:59.001 22:18:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:59.260 ************************************ 00:34:59.260 START TEST nvmf_auth_host 00:34:59.260 ************************************ 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:59.260 * Looking for test storage... 00:34:59.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:34:59.260 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:34:59.261 22:18:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:01.164 
22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:01.164 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:01.164 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:01.164 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:01.164 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:01.164 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:01.165 22:18:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:01.165 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:01.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:01.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:35:01.165 00:35:01.165 --- 10.0.0.2 ping statistics --- 00:35:01.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.165 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:35:01.165 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:01.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:01.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:35:01.165 00:35:01.165 --- 10.0.0.1 ping statistics --- 00:35:01.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.165 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:35:01.165 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:01.165 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:35:01.165 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:01.165 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:01.165 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:01.165 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:01.165 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:01.165 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:01.165 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:01.165 22:18:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:35:01.165 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:01.165 22:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:01.165 22:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.165 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=31541 00:35:01.165 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:35:01.165 22:18:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 31541 00:35:01.165 22:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 31541 ']' 00:35:01.165 22:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:01.165 22:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:01.165 22:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
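For readability, the nvmf_tcp_init/nvmfappstart trace above condenses to roughly the following sequence (a sketch assembled from the xtrace using this run's interface names and addresses, not a verbatim excerpt of nvmf/common.sh):

# target side: isolate the first E810 port (cvl_0_0) in its own network namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP ahead of existing rules
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator
# the target application then runs inside the namespace, with auth tracing enabled
ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth

where "$rootdir" stands for the spdk checkout spelled out in the full paths above.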
00:35:01.165 22:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:01.165 22:18:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.099 22:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:02.099 22:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:35:02.099 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:02.099 22:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:02.099 22:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=86262a61f4ad18a928cbf97af75f396e 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.V4V 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 86262a61f4ad18a928cbf97af75f396e 0 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 86262a61f4ad18a928cbf97af75f396e 0 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=86262a61f4ad18a928cbf97af75f396e 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.V4V 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.V4V 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.V4V 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:35:02.358 
22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c3cf0bb6c660be7fad0ee3d9d3a9772e3a34ad272e675efe69e1c76d4ef5a91c 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.XZ3 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c3cf0bb6c660be7fad0ee3d9d3a9772e3a34ad272e675efe69e1c76d4ef5a91c 3 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c3cf0bb6c660be7fad0ee3d9d3a9772e3a34ad272e675efe69e1c76d4ef5a91c 3 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c3cf0bb6c660be7fad0ee3d9d3a9772e3a34ad272e675efe69e1c76d4ef5a91c 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:35:02.358 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.XZ3 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.XZ3 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.XZ3 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f8a747135822e7b1028af2dcda053993037d23d1c5b52035 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ONn 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f8a747135822e7b1028af2dcda053993037d23d1c5b52035 0 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f8a747135822e7b1028af2dcda053993037d23d1c5b52035 0 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f8a747135822e7b1028af2dcda053993037d23d1c5b52035 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ONn 00:35:02.359 22:18:21 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ONn 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.ONn 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4d943d3b37326b57e47391aff15e20641f84d0668aa85c15 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.XEO 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4d943d3b37326b57e47391aff15e20641f84d0668aa85c15 2 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4d943d3b37326b57e47391aff15e20641f84d0668aa85c15 2 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4d943d3b37326b57e47391aff15e20641f84d0668aa85c15 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.XEO 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.XEO 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.XEO 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3204028f474876fcd9c960635a682b8f 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.MHl 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3204028f474876fcd9c960635a682b8f 1 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3204028f474876fcd9c960635a682b8f 1 
00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3204028f474876fcd9c960635a682b8f 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:35:02.359 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.MHl 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.MHl 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.MHl 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=96e059020241727e8f94dc6101373d64 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.65p 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 96e059020241727e8f94dc6101373d64 1 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 96e059020241727e8f94dc6101373d64 1 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=96e059020241727e8f94dc6101373d64 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.65p 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.65p 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.65p 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=ba8cb9d8c76e625108eb69d51c55b6dddf2b9ed97ef64fa1 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.oqR 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ba8cb9d8c76e625108eb69d51c55b6dddf2b9ed97ef64fa1 2 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ba8cb9d8c76e625108eb69d51c55b6dddf2b9ed97ef64fa1 2 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ba8cb9d8c76e625108eb69d51c55b6dddf2b9ed97ef64fa1 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.oqR 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.oqR 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.oqR 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9812831bbad00b6c3c7da68cee482e00 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.l64 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9812831bbad00b6c3c7da68cee482e00 0 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9812831bbad00b6c3c7da68cee482e00 0 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9812831bbad00b6c3c7da68cee482e00 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.l64 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.l64 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.l64 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=07d79836b2aa5f52b54c36de838189bb030638b8caa3970e6d6ac0b06fc8bf58 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ulf 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 07d79836b2aa5f52b54c36de838189bb030638b8caa3970e6d6ac0b06fc8bf58 3 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 07d79836b2aa5f52b54c36de838189bb030638b8caa3970e6d6ac0b06fc8bf58 3 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=07d79836b2aa5f52b54c36de838189bb030638b8caa3970e6d6ac0b06fc8bf58 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ulf 00:35:02.618 22:18:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ulf 00:35:02.619 22:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.ulf 00:35:02.619 22:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:35:02.619 22:18:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 31541 00:35:02.619 22:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 31541 ']' 00:35:02.619 22:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:02.619 22:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:02.619 22:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:02.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
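Each of the five key/ckey pairs above comes from gen_dhchap_key; condensed, the helper behaves roughly like this (a sketch reconstructed from the repeated xtrace blocks, not copied from nvmf/common.sh -- the DHHC-1 framing is inferred from the secrets printed later in this log):

gen_dhchap_key() { # gen_dhchap_key <null|sha256|sha384|sha512> <hex-length>
  local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
  local digest=$1 len=$2 file key
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars = len/2 random bytes
  file=$(mktemp -t "spdk.key-$digest.XXX")
  # secret format: DHHC-1:<two-digit digest id>:<base64(key || crc32_le(key))>:
  python - "$key" "${digests[$digest]}" > "$file" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02d}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PY
  chmod 0600 "$file"
  echo "$file"
}

The trailing four bytes are a little-endian CRC32 of the ASCII key, so any secret in this log can be sanity-checked; for the key1 secret used further down:

python - <<'PY'
import base64, zlib
raw = base64.b64decode("ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==")
key, crc = raw[:-4], raw[-4:]
assert zlib.crc32(key).to_bytes(4, "little") == crc
print(key.decode())   # f8a747135822e7b1028af2dcda053993037d23d1c5b52035
PY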
00:35:02.619 22:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:02.619 22:18:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.V4V 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.XZ3 ]] 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XZ3 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ONn 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.XEO ]] 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XEO 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.MHl 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.65p ]] 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.65p 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.oqR 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.l64 ]] 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.l64 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.877 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ulf 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
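The keyring_file_add_key calls traced above are the unrolled form of a short loop over keys[]/ckeys[] (a sketch; rpc_cmd is the suite's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock):

for i in "${!keys[@]}"; do
  rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
  # controller (bidirectional) secrets are optional -- ckeys[4] is empty above
  [[ -n ${ckeys[i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
done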
00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:03.137 22:18:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:04.071 Waiting for block devices as requested 00:35:04.071 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:04.071 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:04.329 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:04.329 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:04.329 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:04.588 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:04.588 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:04.588 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:04.588 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:04.870 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:04.870 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:04.870 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:04.870 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:05.133 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:05.133 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:05.133 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:05.133 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:05.699 22:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:05.699 22:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:05.699 22:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:05.699 22:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:35:05.699 22:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:05.699 22:18:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:35:05.699 22:18:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:05.699 22:18:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:05.699 22:18:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:05.699 No valid GPT data, bailing 00:35:05.699 22:18:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:05.699 22:18:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:35:05.699 22:18:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:35:05.699 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:05.699 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:05.699 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:05.699 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:05.699 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:05.699 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:35:05.699 22:18:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:35:05.699 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:05.699 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:35:05.699 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:05.699 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:35:05.699 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:35:05.699 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:35:05.699 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:05.699 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:35:05.958 00:35:05.958 Discovery Log Number of Records 2, Generation counter 2 00:35:05.958 =====Discovery Log Entry 0====== 00:35:05.958 trtype: tcp 00:35:05.958 adrfam: ipv4 00:35:05.958 subtype: current discovery subsystem 00:35:05.958 treq: not specified, sq flow control disable supported 00:35:05.958 portid: 1 00:35:05.958 trsvcid: 4420 00:35:05.958 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:05.958 traddr: 10.0.0.1 00:35:05.958 eflags: none 00:35:05.958 sectype: none 00:35:05.958 =====Discovery Log Entry 1====== 00:35:05.958 trtype: tcp 00:35:05.958 adrfam: ipv4 00:35:05.958 subtype: nvme subsystem 00:35:05.958 treq: not specified, sq flow control disable supported 00:35:05.958 portid: 1 00:35:05.958 trsvcid: 4420 00:35:05.958 subnqn: nqn.2024-02.io.spdk:cnode0 00:35:05.958 traddr: 10.0.0.1 00:35:05.958 eflags: none 00:35:05.958 sectype: none 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 
]] 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.958 nvme0n1 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.958 
22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: ]] 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.958 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.217 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.217 
22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.217 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:06.217 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:06.217 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:06.217 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.217 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.217 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:06.217 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.217 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:06.217 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:06.217 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:06.217 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:06.217 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.217 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.217 nvme0n1 00:35:06.217 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.217 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.217 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.217 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.217 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.217 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.217 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.217 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.217 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.217 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:06.218 22:18:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: ]] 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.218 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.477 nvme0n1 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
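Each connect_authenticate pass, like the one just completed for key1 under sha256/ffdhe2048, is the core assertion of this test: restrict the host to a single digest/dhgroup pair, attach over TCP using the keyring entries, check that a controller actually materialized, then detach. As bare RPC calls the cycle looks roughly like this (every flag value is taken from the iteration traced above):

  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # must print nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0

A failed handshake surfaces as an attach error, which is why the harness re-checks the controller name after every combination rather than only once.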
00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: ]] 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.477 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.736 nvme0n1 00:35:06.736 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.736 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.736 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.736 22:18:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.736 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.736 22:18:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: ]] 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:06.736 22:18:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.736 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.995 nvme0n1 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.995 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.253 nvme0n1 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: ]] 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:07.253 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:07.254 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:07.254 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.254 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.254 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.254 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.254 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:07.254 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:07.254 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:07.254 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.254 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.254 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:07.254 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.254 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:07.254 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:07.254 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:07.254 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:07.254 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.254 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.512 nvme0n1 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: ]] 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.512 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.771 nvme0n1 00:35:07.771 
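Every secret echoed through this trace uses the DHHC-1 representation defined for NVMe in-band authentication, and the two-digit field after the prefix is worth decoding when reading the log: it names the transform already applied to the stored secret, independently of the HMAC digest negotiated on the wire, and the base64 payload carries the secret together with a CRC-32 check (a summary from the spec, not something the log itself states):

  #   DHHC-1:00:<base64>:   plain secret, no transform
  #   DHHC-1:01:<base64>:   secret transformed with SHA-256
  #   DHHC-1:02:<base64>:   secret transformed with SHA-384
  #   DHHC-1:03:<base64>:   secret transformed with SHA-512

That is why /tmp/spdk.key-sha384.XEO, registered as ckey1 at the top of this section, shows up in the trace as a DHHC-1:02: string: the file name records the transform baked into the secret.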
22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.771 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.771 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.771 22:18:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.771 22:18:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: ]] 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:07.771 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:07.772 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.030 nvme0n1 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
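On the kernel side, nvmet_auth_set_key (whose echoes continue in the trace below) installs the matching expectations under the per-host configfs directory created when nqn.2024-02.io.spdk:host0 was linked into the subsystem's allowed_hosts. xtrace again omits the redirect targets; under the stock nvmet layout they would plausibly be the following, shown for the sha256/ffdhe3072 iteration in progress:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"    # digest under test
  echo ffdhe3072 > "$host/dhchap_dhgroup"      # DH group under test
  echo "$key" > "$host/dhchap_key"             # host DHHC-1 secret for this keyid
  echo "$ckey" > "$host/dhchap_ctrl_key"       # controller secret, when one exists

Both ends therefore hold the same secret pair for every iteration; the sweep only varies which digest, DH group, and key slot the handshake is forced to use.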
00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: ]] 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.030 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:08.031 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.031 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:08.031 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:08.031 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:08.031 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:08.031 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.031 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.289 nvme0n1 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.289 
22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.289 22:18:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.289 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.547 nvme0n1 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: ]] 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:35:08.548 22:18:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.548 22:18:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.806 nvme0n1 00:35:08.806 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.806 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.806 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.806 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.806 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.806 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.806 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.806 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.806 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.806 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.064 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.064 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:09.064 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:35:09.064 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.064 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:09.064 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:09.064 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:09.064 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:09.064 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:09.064 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:09.064 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:09.064 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:09.064 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: ]] 00:35:09.064 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:09.064 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:35:09.064 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.064 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:09.064 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:09.064 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:09.064 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.064 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:09.065 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.065 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.065 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.065 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.065 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:09.065 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:09.065 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:09.065 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.065 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.065 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:09.065 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.065 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:09.065 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:09.065 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:09.065 22:18:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:09.065 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.065 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.324 nvme0n1 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: ]] 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.324 22:18:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.324 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.583 nvme0n1 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: ]] 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.583 22:18:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.151 nvme0n1 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.151 22:18:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.151 22:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.410 nvme0n1 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:10.410 22:18:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: ]] 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.410 22:18:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.977 nvme0n1 00:35:10.977 22:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.977 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.978 
22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: ]] 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.978 22:18:30 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.978 22:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.544 nvme0n1 00:35:11.544 22:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.544 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.544 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.544 22:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.544 22:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.544 22:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.544 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.544 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.544 22:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.544 22:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.544 22:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.544 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.544 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:35:11.544 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.544 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:11.544 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:11.544 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:11.544 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:11.544 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:11.544 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: ]] 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:11.545 22:18:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.109 nvme0n1 00:35:12.109 22:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.109 22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.109 22:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.109 22:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.109 22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.109 22:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.366 22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.366 22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.366 22:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.366 22:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.366 22:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.366 22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.366 
22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:35:12.366 22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.366 22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:12.366 22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:12.366 22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:12.366 22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:12.366 22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:12.366 22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:12.366 22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:12.366 22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:12.366 22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: ]] 00:35:12.366 22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:12.366 22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:35:12.367 22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.367 22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:12.367 22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:12.367 22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:12.367 22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.367 22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:12.367 22:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.367 22:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.367 22:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.367 22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.367 22:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:12.367 22:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:12.367 22:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:12.367 22:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.367 22:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.367 22:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:12.367 22:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.367 22:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:12.367 22:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:12.367 22:18:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:12.367 22:18:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:12.367 22:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.367 22:18:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.932 nvme0n1 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:12.932 22:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.497 nvme0n1 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: ]] 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:13.497 22:18:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.429 nvme0n1 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.429 22:18:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: ]] 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.429 22:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:14.687 22:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:14.687 22:18:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:35:14.687 22:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:14.687 22:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:14.687 22:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.687 22:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.687 22:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:14.687 22:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:14.687 22:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:14.687 22:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:14.687 22:18:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:14.687 22:18:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:14.687 22:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:14.687 22:18:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.620 nvme0n1 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: ]] 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.620 22:18:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.553 nvme0n1 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:16.553 
22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: ]] 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.553 22:18:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.488 nvme0n1 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:17.488 
22:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.488 22:18:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.457 nvme0n1 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: ]] 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.457 22:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.716 nvme0n1 00:35:18.716 22:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.716 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.716 22:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.716 22:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.716 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.716 22:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.716 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.716 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.716 22:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.716 22:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.716 22:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.716 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.716 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:18.716 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.716 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:18.716 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:18.716 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:18.716 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:18.716 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:18.716 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:18.716 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: ]] 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
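The DHHC-1 strings echoed back and forth above follow the NVMe DH-HMAC-CHAP secret representation "DHHC-1:<xx>:<base64>:": the middle field names the hash used to transform the stored secret (00 meaning untransformed) and the base64 payload carries the secret followed by a 4-byte CRC-32. A minimal sketch inspecting one of the non-secret test keys from this log; only base64 and wc from coreutils are assumed, and the layout claim is per the spec rather than anything this trace proves:

  # Pull the base64 payload out of a DHHC-1 secret and check its size.
  # 48 base64 chars decode to 36 bytes: a 32-byte secret plus 4-byte CRC-32.
  key='DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1:'
  blob=${key#DHHC-1:*:}   # strip the "DHHC-1:<xx>:" prefix
  blob=${blob%:}          # strip the trailing ':'
  base64 -d <<< "$blob" | wc -c   # -> 36

The longer 02- and 03-class keys above decode the same way to 48- and 64-byte secrets respectively.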
00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.717 22:18:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.975 nvme0n1 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: ]] 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.975 nvme0n1 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.975 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: ]] 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.234 nvme0n1 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.234 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.493 nvme0n1 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: ]] 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.493 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.751 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.751 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.751 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:19.751 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:19.751 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:19.751 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.751 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.751 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
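The [[ -z tcp ]] guard just above and the [[ -z NVMF_INITIATOR_IP ]] check that follows are the body of get_main_ns_ip: an associative array maps each transport to the name of the environment variable carrying its address, and the variable behind the tcp entry is expanded indirectly to yield 10.0.0.1. A compact reconstruction of that helper, assuming the variable names shown here from nvmf/common.sh and TEST_TRANSPORT=tcp as in this run:

  # Reconstructed from the xtrace; NVMF_INITIATOR_IP is 10.0.0.1 in this job.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP
          [tcp]=NVMF_INITIATOR_IP
      )
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1            # indirect expansion: guards 10.0.0.1
      echo "${!ip}"
  }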
00:35:19.751 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.751 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:19.751 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:19.751 22:18:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:19.751 22:18:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:19.751 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.751 22:18:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.751 nvme0n1 00:35:19.751 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.751 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.751 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.751 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.751 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.751 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.751 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.751 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: ]] 00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
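Each connect_authenticate pass, like the sha384/ffdhe3072 keyid=1 one starting here, has the same shape: pin the host to a single digest/dhgroup pair, attach with the key pair under test, confirm the controller enumerated, and detach before the next combination. A standalone sketch of one pass, assuming SPDK's scripts/rpc.py on PATH (the rpc_cmd wrapper in these traces ultimately drives it) and the key names key1/ckey1 provisioned earlier by the harness:

  # One authenticate-and-verify cycle against the listener at 10.0.0.1:4420.
  rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Verify the controller came up under the expected name...
  [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  # ...then tear it down so the next (digest, dhgroup, keyid) tuple starts clean.
  rpc.py bdev_nvme_detach_controller nvme0

For key IDs with no controller key (keyid=4 above, where ckey is empty), --dhchap-ctrlr-key is simply omitted, which is exactly what the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 arranges.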
00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.752 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.010 nvme0n1 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: ]] 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.010 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.268 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.268 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.268 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:20.268 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:20.268 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:20.268 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.268 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.268 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:20.268 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.268 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:20.268 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:20.269 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:20.269 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:20.269 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.269 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.269 nvme0n1 00:35:20.269 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.269 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.269 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.269 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.269 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.269 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.269 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.269 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.269 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.269 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: ]] 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.527 nvme0n1 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.527 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.786 22:18:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.786 nvme0n1 00:35:20.786 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:20.786 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.786 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:20.786 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.786 22:18:40 
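
The nvmet_auth_set_key rounds traced above (host/auth.sh@42-51) push one digest/DH-group/secret tuple at a time into the kernel target before each connect attempt. The trace only shows the echoed values, not their destinations; a minimal sketch of the equivalent configfs writes, assuming the standard Linux nvmet layout and that the host entry is named after the host NQN:

# Target-side DH-HMAC-CHAP setup (sketch; the configfs paths are assumed,
# only the echoed values appear in the trace above)
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha384)' > "$host/dhchap_hash"       # digest under test
echo ffdhe3072      > "$host/dhchap_dhgroup"    # DH group under test
echo 'DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz:' \
    > "$host/dhchap_key"                        # host secret (keyid 2 round)
# bidirectional rounds also install the controller secret; keyid 4 has none
echo 'DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h:' \
    > "$host/dhchap_ctrl_key"
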
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.786 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: ]] 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:21.044 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.045 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.045 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:21.045 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.045 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:21.045 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:21.045 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:21.045 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:21.045 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.045 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.302 nvme0n1 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: ]] 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.302 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:21.303 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:21.303 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:21.303 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.303 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.303 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:21.303 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.303 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:21.303 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:21.303 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:21.303 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:21.303 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.303 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.560 nvme0n1 00:35:21.560 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.560 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.560 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.560 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.560 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.561 22:18:40 
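
On the host side, each connect_authenticate call (host/auth.sh@55-65) is the same four-RPC round trip seen in the trace: constrain the allowed digests and DH groups, attach with the key pair for this keyid, confirm the controller appeared, and detach. Condensed as a standalone sketch (the trace drives these through the harness's rpc_cmd wrapper; here SPDK's scripts/rpc.py is invoked directly, and key1/ckey1 are assumed to have been registered in the keyring earlier in the run):

# One authentication round against the ffdhe4096 target config (sketch)
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0
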
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: ]] 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:21.561 22:18:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:21.819 22:18:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:21.819 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:21.819 22:18:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.077 nvme0n1 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: ]] 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:22.077 22:18:41 
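
Every RPC in the trace is bracketed by xtrace_disable / set +x and followed by a [[ 0 == 0 ]] check (common/autotest_common.sh@559 and @587): the harness silences xtrace while the RPC runs, then asserts its exit status. A rough sketch of that wrapper pattern, reconstructed from the traced line numbers (the helper names come from common/autotest_common.sh as shown in the trace; the body below is an assumption, not the real implementation):

# Sketch of the harness's quiet-RPC pattern (assumed body)
rpc_cmd() {
    xtrace_disable                  # sh@559: stop tracing the plumbing
    scripts/rpc.py "$@"             # the actual JSON-RPC call
    local rc=$?                     # capture the RPC's exit status
    xtrace_restore
    [[ $rc == 0 ]]                  # sh@587-style status assertion
}
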
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:22.077 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.078 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.078 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.078 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.078 22:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:22.078 22:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:22.078 22:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:22.078 22:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.078 22:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.078 22:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:22.078 22:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.078 22:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:22.078 22:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:22.078 22:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:22.078 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:22.078 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.078 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.336 nvme0n1 00:35:22.336 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.336 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.336 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.336 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.336 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.336 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.336 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.336 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.336 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.336 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.336 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.336 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:22.336 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:22.336 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.336 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:22.336 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:22.336 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:22.336 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:22.336 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:22.336 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:22.336 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:22.336 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:22.336 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:22.337 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:22.337 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.337 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:22.337 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:22.337 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:22.337 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.337 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:22.337 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.337 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.337 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.337 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.337 22:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:22.337 22:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:22.337 22:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:22.337 22:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.337 22:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.337 22:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:22.337 22:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.337 22:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:22.337 22:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:22.337 22:18:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:22.337 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:22.337 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:35:22.337 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.595 nvme0n1 00:35:22.595 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.595 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.595 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.595 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.595 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.595 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.853 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.853 22:18:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.853 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.853 22:18:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: ]] 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:22.853 22:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:22.854 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:22.854 22:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.854 22:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.420 nvme0n1 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: ]] 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.420 22:18:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.997 nvme0n1 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.997 22:18:43 
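
The get_main_ns_ip helper traced before every attach (nvmf/common.sh@741-755) picks which environment variable holds the target address for the transport in use, then dereferences it; on this tcp run it resolves NVMF_INITIATOR_IP to 10.0.0.1. A compact reconstruction from the traced steps (the variable names are exactly those in the trace; the indirect expansion is inferred from the [[ -z 10.0.0.1 ]] check at sh@750):

# Reconstruction of get_main_ns_ip (nvmf/common.sh@741-755) from the trace
get_main_ns_ip() {
    local ip
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    ip=${ip_candidates[$TEST_TRANSPORT]}    # tcp -> NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1             # sh@750: bail if the var is unset
    echo "${!ip}"                           # sh@755: 10.0.0.1 on this run
}
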
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: ]] 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:23.997 22:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.570 nvme0n1 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: ]] 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:24.570 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.571 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:24.571 22:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.571 22:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.571 22:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:24.571 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.571 22:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:24.571 22:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:24.571 22:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:24.571 22:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.571 22:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.571 22:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:24.571 22:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.571 22:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:24.571 22:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:24.571 22:18:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:24.571 22:18:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:24.571 22:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:24.571 22:18:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.136 nvme0n1 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.136 22:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:25.137 22:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.137 22:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:35:25.137 22:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:25.137 22:18:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:25.137 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:25.137 22:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.137 22:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.702 nvme0n1 00:35:25.702 22:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.702 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.702 22:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.702 22:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.702 22:18:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.702 22:18:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.702 22:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.702 22:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.702 22:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.702 22:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.702 22:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.702 22:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:25.702 22:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.702 22:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:25.702 22:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.702 22:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:25.702 22:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:25.702 22:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:25.702 22:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:25.702 22:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:25.702 22:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:25.702 22:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: ]] 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:25.703 22:18:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.636 nvme0n1 00:35:26.636 22:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.636 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:26.636 22:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.636 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:26.636 22:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.636 22:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: ]] 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:26.894 22:18:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.827 nvme0n1 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: ]] 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
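On the host side, connect_authenticate (auth.sh@104) drives two RPCs, both visible verbatim above: bdev_nvme_set_options restricts the allowed DH-HMAC-CHAP digests and DH groups for the negotiation, and bdev_nvme_attach_controller connects with the named keys. rpc_cmd is the autotest wrapper around scripts/rpc.py; the key names (key1, ckey1) are assumed to have been registered with SPDK's keyring earlier in the script, a step that falls outside this excerpt:

    # The same two calls spelled out with rpc.py directly (flags verbatim from the log).
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1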
--dhchap-dhgroups ffdhe8192 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.827 22:18:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.761 nvme0n1 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: ]] 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.761 22:18:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.695 nvme0n1 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:29.695 22:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:29.696 22:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.696 22:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:29.696 22:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.696 22:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:29.696 22:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:29.696 22:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:29.696 22:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:29.696 22:18:49 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:29.696 22:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:29.696 22:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:29.696 22:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:29.696 22:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:29.696 22:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:29.696 22:18:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:29.696 22:18:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:29.696 22:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.696 22:18:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.069 nvme0n1 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
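The get_main_ns_ip helper (nvmf/common.sh@741-755) resolves which address to dial by mapping the transport to the name of an environment variable and then dereferencing it, which is why the xtrace prints NVMF_INITIATOR_IP before echoing 10.0.0.1. Reconstructed from the trace as a sketch; the TEST_TRANSPORT variable name is an assumption, the log only shows its expansion to tcp:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # common.sh@744
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # common.sh@745
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}         # holds a variable *name*
        [[ -z ${!ip} ]] && return 1                  # indirect expansion -> 10.0.0.1
        echo "${!ip}"
    }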
DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: ]] 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.069 nvme0n1 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.069 22:18:50 
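Note the two shapes of the auth.sh@51 test in these entries: [[ -z DHHC-1:03:... ]] when a controller key exists, and [[ -z '' ]] for keyid 4, which ships no ckey. With a ckey present the controller must also authenticate itself to the host (bidirectional DH-HMAC-CHAP) and the attach gains --dhchap-ctrlr-key; keyid 4 therefore exercises the one-way case. The auth.sh@58 array expansion, shown verbatim in the trace, is what makes the flag optional:

    # ckey expands to nothing when ckeys[keyid] is empty, so for keyid 4 the
    # --dhchap-ctrlr-key flag is simply omitted from the attach below.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"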
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: ]] 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.069 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.070 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.070 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.070 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:31.070 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:31.070 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:31.070 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.070 22:18:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.070 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:31.070 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.070 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:31.070 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:31.070 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:31.070 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:31.070 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.070 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.328 nvme0n1 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: ]] 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.328 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:31.329 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.329 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:31.329 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:31.329 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:31.329 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:31.329 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.329 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.587 nvme0n1 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.587 22:18:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: ]] 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:31.587 22:18:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.587 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.847 nvme0n1 00:35:31.847 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.847 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.847 22:18:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.847 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.847 22:18:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- 
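About the secrets themselves: each follows the NVMe DH-HMAC-CHAP representation DHHC-1:xx:<base64>:, where the two-digit field records the hash associated with the secret (00 for an untransformed secret, 01 SHA-256, 02 SHA-384, 03 SHA-512), which is why the keyid 3 and 4 secrets above are visibly longer than the keyid 0 ones. A hedged example of minting such a secret with nvme-cli's gen-dhchap-key subcommand; the tooling is an assumption here, since this run uses pre-baked test keys:

    # Assumed tooling: nvme-cli. Not part of this log; the test keys are hard-coded.
    nvme gen-dhchap-key --hmac=2 --nqn=nqn.2024-02.io.spdk:host0
    # prints a secret of the form DHHC-1:02:<base64>: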
common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.847 nvme0n1 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:31.847 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: ]] 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.139 nvme0n1 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.139 
22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.139 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.140 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.140 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.140 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.398 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.398 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.398 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.398 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.398 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.398 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.398 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:32.398 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.398 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:32.398 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:32.398 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:32.398 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:32.398 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:32.398 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: ]] 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.399 22:18:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.399 nvme0n1 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.399 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: ]] 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.657 22:18:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.657 nvme0n1 00:35:32.657 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.657 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.657 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.657 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.657 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.657 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.916 22:18:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: ]] 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
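The nvmet_auth_set_key traces repeated through this section (host/auth.sh@42-51) show only the values being echoed: 'hmac(sha512)', the dhgroup name, and the DHHC-1 secrets. A sketch of the target-side step they imply, assuming the standard kernel nvmet configfs layout for the host entry — the destination paths and the keys/ckeys arrays are assumptions, not shown in this log:

nvmet_auth_set_key() {
	local digest=$1 dhgroup=$2 keyid=$3
	local key=${keys[keyid]} ckey=${ckeys[keyid]}   # arrays assumed from auth.sh
	# Assumed path: the nvmet host entry for the initiator's hostnqn.
	local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

	echo "hmac($digest)" > "$host/dhchap_hash"      # auth.sh@48
	echo "$dhgroup" > "$host/dhchap_dhgroup"        # auth.sh@49
	echo "$key" > "$host/dhchap_key"                # auth.sh@50
	# The controller key is optional; keyid=4 has none in this run (auth.sh@51).
	[[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}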
00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.916 nvme0n1 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:32.916 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:33.175 
22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:33.175 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:33.176 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:33.176 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.176 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.176 nvme0n1 00:35:33.176 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.176 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.176 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.176 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.176 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: ]] 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.434 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.694 nvme0n1 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: ]] 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.694 22:18:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:33.694 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:33.695 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:33.695 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.695 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.695 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:33.695 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.695 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:33.695 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:33.695 22:18:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:33.695 22:18:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:33.695 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.695 22:18:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.953 nvme0n1 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
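The DHHC-1 secrets echoed throughout this section follow the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<hmac-id>:<base64 secret>:. As a hedged aside — the id-to-hash mapping below is background knowledge about that format, not something this log states — a small validator for the strings seen here:

parse_dhchap_key() {
	local key=$1 hash
	[[ $key =~ ^DHHC-1:(0[0-3]):([A-Za-z0-9+/=]+):$ ]] || return 1
	case ${BASH_REMATCH[1]} in
		00) hash="none (secret used as-is)" ;;
		01) hash="SHA-256" ;;
		02) hash="SHA-384" ;;
		03) hash="SHA-512" ;;
	esac
	echo "hmac=$hash secret_b64=${BASH_REMATCH[2]}"
}

# Example against a keyid=2 secret from this run:
parse_dhchap_key "DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz:"
# -> hmac=SHA-256 secret_b64=MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz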
00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: ]] 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:33.954 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.524 nvme0n1 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: ]] 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.524 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.784 nvme0n1 00:35:34.784 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.784 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:34.784 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.784 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.784 22:18:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:34.784 22:18:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.784 22:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.044 nvme0n1 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: ]] 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
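At this point the trace has advanced to the ffdhe6144 group. As far as this excerpt shows, the matrix being walked is sha512 crossed with ffdhe3072/ffdhe4096/ffdhe6144/ffdhe8192 and key ids 0-4; the loop skeleton implied by the host/auth.sh@101-104 trace lines is sketched below (the digest presumably comes from an enclosing loop not visible in this excerpt):

for dhgroup in "${dhgroups[@]}"; do                       # auth.sh@101
	for keyid in "${!keys[@]}"; do                        # auth.sh@102
		nvmet_auth_set_key sha512 "$dhgroup" "$keyid"     # auth.sh@103
		connect_authenticate sha512 "$dhgroup" "$keyid"   # auth.sh@104
	done
done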
00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.044 22:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.610 nvme0n1 00:35:35.610 22:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.610 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:35.610 22:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.610 22:18:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:35.610 22:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.610 22:18:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.610 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:35.610 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:35.610 22:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: ]] 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
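connect_authenticate itself (host/auth.sh@55-65) is the initiator-side half of each pass: pin the allowed digest and dhgroup, attach with the keyid-matched secrets, confirm the controller enumerates, then tear it down. A reconstruction from the traces in this section, with the transport, address, and NQNs inlined from the values the log shows; rpc_cmd is the usual SPDK test wrapper around scripts/rpc.py:

connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3
	# Controller key flag only when a ckey exists for this keyid (auth.sh@58).
	local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
		--dhchap-dhgroups "$dhgroup"                          # auth.sh@60
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" "${ckey[@]}"               # auth.sh@61
	# Authentication succeeded iff the controller shows up (auth.sh@64).
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	rpc_cmd bdev_nvme_detach_controller nvme0                 # auth.sh@65
}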
00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:35.868 22:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.435 nvme0n1 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: ]] 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:36.435 22:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:36.436 22:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:36.436 22:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:36.436 22:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:36.436 22:18:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:36.436 22:18:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:36.436 22:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.436 22:18:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.002 nvme0n1 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: ]] 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.002 22:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.568 nvme0n1 00:35:37.568 22:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.568 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:37.568 22:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.568 22:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.568 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:37.568 22:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.568 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:37.568 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:37.568 22:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.568 22:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.568 22:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.568 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:37.568 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:37.568 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:37.568 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:37.568 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:37.568 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:37.568 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:37.568 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:37.568 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:37.568 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:37.569 22:18:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.132 nvme0n1 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.132 22:18:57 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODYyNjJhNjFmNGFkMThhOTI4Y2JmOTdhZjc1ZjM5NmV1heB1: 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: ]] 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzNjZjBiYjZjNjYwYmU3ZmFkMGVlM2Q5ZDNhOTc3MmUzYTM0YWQyNzJlNjc1ZWZlNjllMWM3NmQ0ZWY1YTkxY1Co4Vc=: 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:38.132 22:18:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.065 nvme0n1 00:35:39.065 22:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: ]] 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.325 22:18:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.261 nvme0n1 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.261 22:18:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzIwNDAyOGY0NzQ4NzZmY2Q5Yzk2MDYzNWE2ODJiOGaDcvdz: 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: ]] 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OTZlMDU5MDIwMjQxNzI3ZThmOTRkYzYxMDEzNzNkNjQ8hA0h: 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:40.261 22:18:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.199 nvme0n1 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmE4Y2I5ZDhjNzZlNjI1MTA4ZWI2OWQ1MWM1NWI2ZGRkZjJiOWVkOTdlZjY0ZmExF1m8ow==: 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: ]] 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMjgzMWJiYWQwMGI2YzNjN2RhNjhjZWU0ODJlMDAue0MC: 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:35:41.199 22:19:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:41.199 22:19:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.136 nvme0n1 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDdkNzk4MzZiMmFhNWY1MmI1NGMzNmRlODM4MTg5YmIwMzA2MzhiOGNhYTM5NzBlNmQ2YWMwYjA2ZmM4YmY1OMvLhAI=: 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:35:42.137 22:19:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.513 nvme0n1 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjhhNzQ3MTM1ODIyZTdiMTAyOGFmMmRjZGEwNTM5OTMwMzdkMjNkMWM1YjUyMDM1/OVHoA==: 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: ]] 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGQ5NDNkM2IzNzMyNmI1N2U0NzM5MWFmZjE1ZTIwNjQxZjg0ZDA2NjhhYTg1YzE148sjWw==: 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.513 
22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.513 request: 00:35:43.513 { 00:35:43.513 "name": "nvme0", 00:35:43.513 "trtype": "tcp", 00:35:43.513 "traddr": "10.0.0.1", 00:35:43.513 "adrfam": "ipv4", 00:35:43.513 "trsvcid": "4420", 00:35:43.513 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:43.513 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:43.513 "prchk_reftag": false, 00:35:43.513 "prchk_guard": false, 00:35:43.513 "hdgst": false, 00:35:43.513 "ddgst": false, 00:35:43.513 "method": "bdev_nvme_attach_controller", 00:35:43.513 "req_id": 1 00:35:43.513 } 00:35:43.513 Got JSON-RPC error response 00:35:43.513 response: 00:35:43.513 { 00:35:43.513 "code": -5, 00:35:43.513 "message": "Input/output error" 00:35:43.513 } 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.513 request: 00:35:43.513 { 00:35:43.513 "name": "nvme0", 00:35:43.513 "trtype": "tcp", 00:35:43.513 "traddr": "10.0.0.1", 00:35:43.513 "adrfam": "ipv4", 00:35:43.513 "trsvcid": "4420", 00:35:43.513 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:43.513 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:43.513 "prchk_reftag": false, 00:35:43.513 "prchk_guard": false, 00:35:43.513 "hdgst": false, 00:35:43.513 "ddgst": false, 00:35:43.513 "dhchap_key": "key2", 00:35:43.513 "method": "bdev_nvme_attach_controller", 00:35:43.513 "req_id": 1 00:35:43.513 } 00:35:43.513 Got JSON-RPC error response 00:35:43.513 response: 00:35:43.513 { 00:35:43.513 "code": -5, 00:35:43.513 "message": "Input/output error" 00:35:43.513 } 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:35:43.513 22:19:02 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:43.513 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:43.514 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:43.514 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:35:43.514 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:43.514 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:35:43.514 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:43.514 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:35:43.514 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:43.514 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:43.514 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:43.514 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:43.514 request: 00:35:43.514 { 00:35:43.514 "name": "nvme0", 00:35:43.514 "trtype": "tcp", 00:35:43.514 "traddr": "10.0.0.1", 00:35:43.514 "adrfam": "ipv4", 
00:35:43.514 "trsvcid": "4420", 00:35:43.514 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:43.514 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:43.514 "prchk_reftag": false, 00:35:43.514 "prchk_guard": false, 00:35:43.514 "hdgst": false, 00:35:43.514 "ddgst": false, 00:35:43.514 "dhchap_key": "key1", 00:35:43.514 "dhchap_ctrlr_key": "ckey2", 00:35:43.514 "method": "bdev_nvme_attach_controller", 00:35:43.514 "req_id": 1 00:35:43.514 } 00:35:43.514 Got JSON-RPC error response 00:35:43.514 response: 00:35:43.514 { 00:35:43.514 "code": -5, 00:35:43.514 "message": "Input/output error" 00:35:43.514 } 00:35:43.514 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:35:43.514 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:35:43.514 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:43.514 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:43.514 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:43.514 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:35:43.514 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:35:43.514 22:19:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:35:43.514 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:43.514 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:35:43.514 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:43.514 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:35:43.514 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:43.514 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:43.514 rmmod nvme_tcp 00:35:43.773 rmmod nvme_fabrics 00:35:43.773 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:43.773 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:35:43.773 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:35:43.773 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 31541 ']' 00:35:43.773 22:19:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 31541 00:35:43.773 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 31541 ']' 00:35:43.773 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 31541 00:35:43.773 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:35:43.773 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:43.773 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 31541 00:35:43.773 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:43.773 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:43.773 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 31541' 00:35:43.773 killing process with pid 31541 00:35:43.773 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 31541 00:35:43.773 22:19:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 31541 00:35:44.705 22:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:44.705 
22:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:44.705 22:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:44.705 22:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:44.705 22:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:44.705 22:19:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:44.705 22:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:44.705 22:19:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:47.277 22:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:47.277 22:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:47.277 22:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:47.277 22:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:35:47.277 22:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:35:47.277 22:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:35:47.277 22:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:47.277 22:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:47.277 22:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:47.277 22:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:47.277 22:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:35:47.277 22:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:35:47.277 22:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:48.214 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:48.214 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:48.214 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:48.214 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:48.214 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:48.214 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:48.214 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:48.214 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:48.214 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:48.214 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:48.214 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:48.214 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:48.214 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:48.214 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:48.214 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:48.214 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:49.154 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:49.154 22:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.V4V /tmp/spdk.key-null.ONn /tmp/spdk.key-sha256.MHl /tmp/spdk.key-sha384.oqR /tmp/spdk.key-sha512.ulf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:35:49.154 22:19:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:50.530 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:50.530 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:50.530 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:50.530 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:50.530 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:50.530 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:50.530 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:50.530 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:50.530 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:50.530 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:50.530 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:50.530 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:50.530 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:50.530 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:50.530 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:50.530 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:50.530 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:50.530 00:35:50.530 real 0m51.322s 00:35:50.530 user 0m49.015s 00:35:50.530 sys 0m5.923s 00:35:50.530 22:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:50.530 22:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:50.530 ************************************ 00:35:50.530 END TEST nvmf_auth_host 00:35:50.530 ************************************ 00:35:50.530 22:19:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:35:50.530 22:19:09 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:35:50.530 22:19:09 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:50.530 22:19:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:50.530 22:19:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:50.530 22:19:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:50.530 ************************************ 00:35:50.530 START TEST nvmf_digest 00:35:50.530 ************************************ 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:50.530 * Looking for test storage... 
00:35:50.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:50.530 22:19:09 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:35:50.530 22:19:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:52.436 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:52.436 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:52.436 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:52.437 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:52.437 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:52.437 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:52.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:52.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:35:52.695 00:35:52.695 --- 10.0.0.2 ping statistics --- 00:35:52.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:52.695 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:52.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:52.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:35:52.695 00:35:52.695 --- 10.0.0.1 ping statistics --- 00:35:52.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:52.695 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:52.695 ************************************ 00:35:52.695 START TEST nvmf_digest_clean 00:35:52.695 ************************************ 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=41235 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 41235 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 41235 ']' 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:52.695 
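The fixture built above is worth a note: nvmf_tcp_init turns the two physical E810 ports into a self-contained TCP loopback by moving one port (cvl_0_0, the target side) into a private network namespace and leaving its peer (cvl_0_1, the initiator side) in the root namespace. A minimal sketch of the same setup, using only the interface names, namespace name, and addresses that appear in the trace (standard iproute2/iptables usage, run as root):

    TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"          # target port now lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"      # initiator address, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                         # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator

The two pings above are exactly this sanity check; once both directions answer, every nvmf_tgt in this test is launched under "ip netns exec cvl_0_0_ns_spdk" so target and initiator traffic traverses the real NICs rather than the kernel loopback.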
22:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:52.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:52.695 22:19:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:52.695 [2024-07-13 22:19:12.015593] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:35:52.695 [2024-07-13 22:19:12.015740] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:52.952 EAL: No free 2048 kB hugepages reported on node 1 00:35:52.952 [2024-07-13 22:19:12.147503] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:53.212 [2024-07-13 22:19:12.402530] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:53.212 [2024-07-13 22:19:12.402607] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:53.212 [2024-07-13 22:19:12.402637] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:53.212 [2024-07-13 22:19:12.402662] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:53.212 [2024-07-13 22:19:12.402684] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
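common_target_config, which runs next (host/digest.sh@43), feeds the target a JSON-RPC batch whose individual calls are not echoed; only the resulting notices (the null0 bdev, TCP transport init, and the listener on 10.0.0.2:4420) show up below. A roughly equivalent sequence issued through scripts/rpc.py against the target socket is sketched here; the NQN, serial number, transport options, and listener address are taken from this log, while the null bdev size and block size are illustrative assumptions:

    rpc=scripts/rpc.py    # default /var/tmp/spdk.sock of the netns'd target
    $rpc framework_start_init                     # target was started with --wait-for-rpc
    $rpc nvmf_create_transport -t tcp -o          # NVMF_TRANSPORT_OPTS from above
    $rpc bdev_null_create null0 100 4096          # size_mb / block_size assumed
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420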
00:35:53.212 [2024-07-13 22:19:12.402738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:53.781 22:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:53.781 22:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:35:53.781 22:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:53.781 22:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:53.781 22:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:53.781 22:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:53.781 22:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:53.781 22:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:53.781 22:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:53.781 22:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.781 22:19:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:54.051 null0 00:35:54.051 [2024-07-13 22:19:13.350568] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:54.051 [2024-07-13 22:19:13.374780] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:54.051 22:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:54.051 22:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:35:54.051 22:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:54.051 22:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:54.051 22:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:54.051 22:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:54.051 22:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:54.051 22:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:54.051 22:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=41390 00:35:54.051 22:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:54.051 22:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 41390 /var/tmp/bperf.sock 00:35:54.051 22:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 41390 ']' 00:35:54.051 22:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:54.051 22:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:54.051 22:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:35:54.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:54.051 22:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:54.051 22:19:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:54.314 [2024-07-13 22:19:13.466934] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:35:54.314 [2024-07-13 22:19:13.467087] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid41390 ] 00:35:54.314 EAL: No free 2048 kB hugepages reported on node 1 00:35:54.314 [2024-07-13 22:19:13.600974] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:54.571 [2024-07-13 22:19:13.852389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:55.137 22:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:55.137 22:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:35:55.137 22:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:55.137 22:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:55.137 22:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:55.705 22:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:55.705 22:19:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:55.964 nvme0n1 00:35:55.964 22:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:55.964 22:19:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:56.224 Running I/O for 2 seconds... 
00:35:58.131 00:35:58.131 Latency(us) 00:35:58.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:58.131 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:58.131 nvme0n1 : 2.01 14468.34 56.52 0.00 0.00 8833.84 4077.80 21554.06 00:35:58.131 =================================================================================================================== 00:35:58.131 Total : 14468.34 56.52 0.00 0.00 8833.84 4077.80 21554.06 00:35:58.131 0 00:35:58.131 22:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:58.131 22:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:58.131 22:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:58.131 22:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:58.131 22:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:58.131 | select(.opcode=="crc32c") 00:35:58.131 | "\(.module_name) \(.executed)"' 00:35:58.391 22:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:58.391 22:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:58.391 22:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:58.391 22:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:58.391 22:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 41390 00:35:58.391 22:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 41390 ']' 00:35:58.391 22:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 41390 00:35:58.391 22:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:35:58.391 22:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:58.391 22:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 41390 00:35:58.391 22:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:35:58.391 22:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:35:58.391 22:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 41390' 00:35:58.391 killing process with pid 41390 00:35:58.391 22:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 41390 00:35:58.391 Received shutdown signal, test time was about 2.000000 seconds 00:35:58.391 00:35:58.391 Latency(us) 00:35:58.391 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:58.391 =================================================================================================================== 00:35:58.391 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:58.391 22:19:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 41390 00:35:59.329 22:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:59.329 22:19:18 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:59.329 22:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:59.329 22:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:59.329 22:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:59.329 22:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:59.329 22:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:59.329 22:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=42012 00:35:59.329 22:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:59.329 22:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 42012 /var/tmp/bperf.sock 00:35:59.329 22:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 42012 ']' 00:35:59.329 22:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:59.329 22:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:59.329 22:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:59.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:59.329 22:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:59.329 22:19:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:59.588 [2024-07-13 22:19:18.765000] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:35:59.588 [2024-07-13 22:19:18.765159] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42012 ] 00:35:59.588 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:59.588 Zero copy mechanism will not be used. 
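The pass that just finished ends the way every nvmf_digest_clean pass does: the harness reads the accel framework statistics back over the bperf RPC socket and asserts both that crc32c work was actually executed and that it ran on the expected module (software here, since scan_dsa=false). Reduced to its essentials, the check performed by get_accel_stats and host/digest.sh@93-96 is the pipeline below; the jq filter is verbatim from the trace, only the wrapping into one pipeline is editorial:

    read -r acc_module acc_executed < <(
        scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 ))            # digests were really computed
    [[ $acc_module == software ]]     # and by the engine this variant expects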
00:35:59.588 EAL: No free 2048 kB hugepages reported on node 1 00:35:59.588 [2024-07-13 22:19:18.886108] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:59.847 [2024-07-13 22:19:19.116056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:00.413 22:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:00.413 22:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:36:00.413 22:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:00.413 22:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:00.413 22:19:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:00.979 22:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:00.979 22:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:01.544 nvme0n1 00:36:01.544 22:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:01.544 22:19:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:01.544 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:01.544 Zero copy mechanism will not be used. 00:36:01.545 Running I/O for 2 seconds... 
00:36:03.489 00:36:03.489 Latency(us) 00:36:03.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:03.489 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:03.489 nvme0n1 : 2.01 2407.83 300.98 0.00 0.00 6636.21 5704.06 8641.04 00:36:03.489 =================================================================================================================== 00:36:03.489 Total : 2407.83 300.98 0.00 0.00 6636.21 5704.06 8641.04 00:36:03.489 0 00:36:03.489 22:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:03.489 22:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:03.489 22:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:03.489 22:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:03.489 22:19:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:03.490 | select(.opcode=="crc32c") 00:36:03.490 | "\(.module_name) \(.executed)"' 00:36:03.747 22:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:03.747 22:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:03.747 22:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:03.747 22:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:03.747 22:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 42012 00:36:03.747 22:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 42012 ']' 00:36:03.747 22:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 42012 00:36:03.747 22:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:03.747 22:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:03.747 22:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 42012 00:36:03.747 22:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:03.747 22:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:03.748 22:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 42012' 00:36:03.748 killing process with pid 42012 00:36:03.748 22:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 42012 00:36:03.748 Received shutdown signal, test time was about 2.000000 seconds 00:36:03.748 00:36:03.748 Latency(us) 00:36:03.748 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:03.748 =================================================================================================================== 00:36:03.748 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:03.748 22:19:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 42012 00:36:05.148 22:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:36:05.148 22:19:24 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:05.148 22:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:05.148 22:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:05.148 22:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:36:05.148 22:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:36:05.148 22:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:05.148 22:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=42604 00:36:05.148 22:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:36:05.148 22:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 42604 /var/tmp/bperf.sock 00:36:05.148 22:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 42604 ']' 00:36:05.148 22:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:05.148 22:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:05.148 22:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:05.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:05.148 22:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:05.148 22:19:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:05.148 [2024-07-13 22:19:24.221175] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
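Every bperf_rpc and bperf_py call in this log expands identically (see the host/digest.sh@18 and @19 traces above): both are thin wrappers that pin SPDK's generic RPC client and the bdevperf test driver to the socket of the bdevperf instance under test. Reconstructed from those expansions, with $rootdir standing for the SPDK checkout (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk in this run):

    bperfsock=/var/tmp/bperf.sock
    bperf_rpc() { "$rootdir/scripts/rpc.py" -s "$bperfsock" "$@"; }
    bperf_py()  { "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s "$bperfsock" "$@"; }

This is also why bdevperf is always started with --wait-for-rpc: the harness needs the socket up so it can issue framework_start_init (and any accel configuration) before the first I/O runs.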
00:36:05.148 [2024-07-13 22:19:24.221319] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid42604 ] 00:36:05.148 EAL: No free 2048 kB hugepages reported on node 1 00:36:05.148 [2024-07-13 22:19:24.354805] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:05.409 [2024-07-13 22:19:24.617045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:05.975 22:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:05.975 22:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:36:05.975 22:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:05.975 22:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:05.975 22:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:06.543 22:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:06.543 22:19:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:06.801 nvme0n1 00:36:06.801 22:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:06.801 22:19:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:07.061 Running I/O for 2 seconds... 
00:36:08.969 00:36:08.969 Latency(us) 00:36:08.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:08.969 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:08.969 nvme0n1 : 2.01 14743.05 57.59 0.00 0.00 8656.64 6893.42 16117.00 00:36:08.969 =================================================================================================================== 00:36:08.969 Total : 14743.05 57.59 0.00 0.00 8656.64 6893.42 16117.00 00:36:08.969 0 00:36:08.969 22:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:08.969 22:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:08.969 22:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:08.969 22:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:08.969 | select(.opcode=="crc32c") 00:36:08.969 | "\(.module_name) \(.executed)"' 00:36:08.969 22:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:09.227 22:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:09.227 22:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:09.227 22:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:09.227 22:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:09.227 22:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 42604 00:36:09.227 22:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 42604 ']' 00:36:09.227 22:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 42604 00:36:09.227 22:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:09.227 22:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:09.227 22:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 42604 00:36:09.227 22:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:09.227 22:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:09.227 22:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 42604' 00:36:09.227 killing process with pid 42604 00:36:09.227 22:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 42604 00:36:09.227 Received shutdown signal, test time was about 2.000000 seconds 00:36:09.227 00:36:09.227 Latency(us) 00:36:09.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:09.227 =================================================================================================================== 00:36:09.227 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:09.227 22:19:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 42604 00:36:10.606 22:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:36:10.606 22:19:29 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:36:10.606 22:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:36:10.606 22:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:36:10.606 22:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:36:10.606 22:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:36:10.606 22:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:36:10.606 22:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=43262 00:36:10.606 22:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 43262 /var/tmp/bperf.sock 00:36:10.606 22:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:36:10.606 22:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 43262 ']' 00:36:10.606 22:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:10.606 22:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:10.606 22:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:10.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:10.606 22:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:10.606 22:19:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:10.606 [2024-07-13 22:19:29.722797] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:36:10.606 [2024-07-13 22:19:29.722974] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43262 ] 00:36:10.606 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:10.606 Zero copy mechanism will not be used. 
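The host side of each pass is likewise uniform: bdevperf comes up with --wait-for-rpc, framework_start_init is issued over the bperf socket, and a controller is attached with the TCP data digest enabled, which is what routes every I/O in this test through the crc32c path. As the attach appears in the trace:

    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # header digest would use the analogous --hdgst flag (assumed; not exercised in this run)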
00:36:10.606 EAL: No free 2048 kB hugepages reported on node 1 00:36:10.606 [2024-07-13 22:19:29.855474] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:10.864 [2024-07-13 22:19:30.111605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:11.430 22:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:11.430 22:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:36:11.430 22:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:36:11.430 22:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:36:11.430 22:19:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:11.999 22:19:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:11.999 22:19:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:12.258 nvme0n1 00:36:12.517 22:19:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:36:12.517 22:19:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:12.517 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:12.517 Zero copy mechanism will not be used. 00:36:12.517 Running I/O for 2 seconds... 
00:36:14.426 00:36:14.427 Latency(us) 00:36:14.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:14.427 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:14.427 nvme0n1 : 2.01 1974.12 246.76 0.00 0.00 8080.72 6068.15 16311.18 00:36:14.427 =================================================================================================================== 00:36:14.427 Total : 1974.12 246.76 0.00 0.00 8080.72 6068.15 16311.18 00:36:14.427 0 00:36:14.427 22:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:36:14.427 22:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:36:14.427 22:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:36:14.427 22:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:36:14.427 | select(.opcode=="crc32c") 00:36:14.427 | "\(.module_name) \(.executed)"' 00:36:14.427 22:19:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:36:14.686 22:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:36:14.686 22:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:36:14.686 22:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:36:14.686 22:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:36:14.686 22:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 43262 00:36:14.686 22:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 43262 ']' 00:36:14.686 22:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 43262 00:36:14.686 22:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:14.686 22:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:14.686 22:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 43262 00:36:14.686 22:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:14.686 22:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:14.686 22:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 43262' 00:36:14.686 killing process with pid 43262 00:36:14.686 22:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 43262 00:36:14.686 Received shutdown signal, test time was about 2.000000 seconds 00:36:14.686 00:36:14.686 Latency(us) 00:36:14.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:14.686 =================================================================================================================== 00:36:14.686 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:14.686 22:19:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 43262 00:36:16.065 22:19:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 41235 00:36:16.065 22:19:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 41235 ']' 00:36:16.065 22:19:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 41235 00:36:16.065 22:19:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:36:16.065 22:19:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:16.065 22:19:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 41235 00:36:16.065 22:19:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:16.065 22:19:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:16.065 22:19:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 41235' 00:36:16.065 killing process with pid 41235 00:36:16.065 22:19:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 41235 00:36:16.065 22:19:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 41235 00:36:17.002 00:36:17.002 real 0m24.453s 00:36:17.002 user 0m47.452s 00:36:17.002 sys 0m4.563s 00:36:17.002 22:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:17.002 22:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:36:17.002 ************************************ 00:36:17.002 END TEST nvmf_digest_clean 00:36:17.002 ************************************ 00:36:17.261 22:19:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:36:17.261 22:19:36 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:36:17.261 22:19:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:17.261 22:19:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:17.261 22:19:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:17.261 ************************************ 00:36:17.261 START TEST nvmf_digest_error 00:36:17.261 ************************************ 00:36:17.261 22:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:36:17.261 22:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:36:17.261 22:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:17.261 22:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:17.261 22:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:17.261 22:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=44086 00:36:17.261 22:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:36:17.261 22:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 44086 00:36:17.261 22:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 44086 ']' 00:36:17.261 22:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:17.261 22:19:36 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:17.261 22:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:17.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:17.261 22:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:17.261 22:19:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:17.261 [2024-07-13 22:19:36.528424] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:36:17.261 [2024-07-13 22:19:36.528592] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:17.261 EAL: No free 2048 kB hugepages reported on node 1 00:36:17.520 [2024-07-13 22:19:36.668854] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:17.778 [2024-07-13 22:19:36.927273] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:17.778 [2024-07-13 22:19:36.927343] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:17.778 [2024-07-13 22:19:36.927375] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:17.778 [2024-07-13 22:19:36.927401] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:17.778 [2024-07-13 22:19:36.927422] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
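nvmf_digest_error, which starts here, differs from the clean variant in one setup step: before any I/O, the target reassigns the crc32c opcode to the accel "error" module and then arms it to inject corrupted digests, so the host is guaranteed to observe data digest failures and recover from them. The two target-side RPCs, exactly as they appear further down in this trace:

    rpc_cmd accel_assign_opc -o crc32c -m error                    # route crc32c through the error module
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # inject crc32c corruption per the -i 256 setting

The expected symptoms follow immediately: nvme_tcp.c reports "data digest error on tqpair" and the affected commands complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the bdev layer keeps retrying because the bperf controller is attached after bdev_nvme_set_options --bdev-retry-count -1.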
00:36:17.778 [2024-07-13 22:19:36.927471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:18.036 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:18.036 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:36:18.036 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:18.036 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:18.036 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:18.348 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:18.348 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:36:18.348 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.348 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:18.348 [2024-07-13 22:19:37.449655] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:36:18.348 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.348 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:36:18.348 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:36:18.348 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:18.348 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:18.606 null0 00:36:18.606 [2024-07-13 22:19:37.821530] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:18.606 [2024-07-13 22:19:37.845757] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:18.606 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:18.606 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:36:18.606 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:18.606 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:18.606 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:36:18.606 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:36:18.606 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=44245 00:36:18.606 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:36:18.606 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 44245 /var/tmp/bperf.sock 00:36:18.606 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 44245 ']' 00:36:18.606 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:18.606 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
00:36:18.606 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 44245 /var/tmp/bperf.sock
00:36:18.606 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 44245 ']'
00:36:18.606 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:18.606 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:36:18.606 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:18.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:18.606 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:36:18.606 22:19:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:18.607 [2024-07-13 22:19:37.922017] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:36:18.607 [2024-07-13 22:19:37.922148] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid44245 ]
00:36:18.607 EAL: No free 2048 kB hugepages reported on node 1
00:36:18.866 [2024-07-13 22:19:38.046542] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:19.126 [2024-07-13 22:19:38.296395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:19.692 22:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:19.692 22:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:36:19.692 22:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:19.692 22:19:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:19.951 22:19:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:19.951 22:19:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:19.951 22:19:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:19.951 22:19:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:19.951 22:19:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:19.951 22:19:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:20.208 nvme0n1
00:36:20.208 22:19:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:36:20.208 22:19:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:20.208 22:19:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:20.208 22:19:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:20.208 22:19:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:36:20.208 22:19:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
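Three host-side details above do the heavy lifting: --nvme-error-stat keeps per-status-code NVMe error counters, --bdev-retry-count -1 retries failed I/O indefinitely (so injected errors are absorbed and counted instead of failing the run; note Fail/s is 0.00 in the stats below), and --ddgst attaches with NVMe/TCP data digests enabled so a corrupted crc32c surfaces as a digest error on the wire. The -i 256 argument to accel_error_inject_error reads as an injection interval of one corrupted crc32c per 256 operations; at the ~14.2k IOPS reported below, a 2.01 s run is roughly 28,500 reads, i.e. about 111 digest errors, which matches the count asserted at the end. A condensed sketch of the same sequence (socket path and names as in the log; the interval reading of -i is an assumption):

    # Host side: drive the idle bdevperf instance over its private RPC socket.
    rpc() { scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    # Keep per-status-code NVMe error counters; retry failed I/O forever.
    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Attach to the target with data digest (ddgst) enabled on the TCP qpair.
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt one crc32c result out of every 256 (interval semantics assumed).
    rpc accel_error_inject_error -o crc32c -t corrupt -i 256
    # Kick off the timed randread workload configured on bdevperf's command line.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests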
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:20.208 Running I/O for 2 seconds... 00:36:20.208 [2024-07-13 22:19:39.586095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.208 [2024-07-13 22:19:39.586162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.208 [2024-07-13 22:19:39.586223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.466 [2024-07-13 22:19:39.606564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.466 [2024-07-13 22:19:39.606618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.466 [2024-07-13 22:19:39.606667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.466 [2024-07-13 22:19:39.625631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.466 [2024-07-13 22:19:39.625684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.466 [2024-07-13 22:19:39.625732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.466 [2024-07-13 22:19:39.641805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.466 [2024-07-13 22:19:39.641856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.466 [2024-07-13 22:19:39.641925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.466 [2024-07-13 22:19:39.661175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.466 [2024-07-13 22:19:39.661236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.466 [2024-07-13 22:19:39.661283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.466 [2024-07-13 22:19:39.679342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.466 [2024-07-13 22:19:39.679394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.466 [2024-07-13 22:19:39.679441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.466 [2024-07-13 22:19:39.697978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.467 [2024-07-13 22:19:39.698025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:35 nsid:1 lba:5683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.467 [2024-07-13 22:19:39.698067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.467 [2024-07-13 22:19:39.715188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.467 [2024-07-13 22:19:39.715239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.467 [2024-07-13 22:19:39.715297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.467 [2024-07-13 22:19:39.733284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.467 [2024-07-13 22:19:39.733336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.467 [2024-07-13 22:19:39.733383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.467 [2024-07-13 22:19:39.751132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.467 [2024-07-13 22:19:39.751196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.467 [2024-07-13 22:19:39.751242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.467 [2024-07-13 22:19:39.766054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.467 [2024-07-13 22:19:39.766099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.467 [2024-07-13 22:19:39.766143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.467 [2024-07-13 22:19:39.786223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.467 [2024-07-13 22:19:39.786289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.467 [2024-07-13 22:19:39.786337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.467 [2024-07-13 22:19:39.806891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.467 [2024-07-13 22:19:39.806949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.467 [2024-07-13 22:19:39.806988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.467 [2024-07-13 22:19:39.821708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.467 [2024-07-13 
22:19:39.821758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.467 [2024-07-13 22:19:39.821805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.467 [2024-07-13 22:19:39.840180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.467 [2024-07-13 22:19:39.840231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.467 [2024-07-13 22:19:39.840277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.467 [2024-07-13 22:19:39.858160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.467 [2024-07-13 22:19:39.858227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.467 [2024-07-13 22:19:39.858274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.725 [2024-07-13 22:19:39.873297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.725 [2024-07-13 22:19:39.873357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.725 [2024-07-13 22:19:39.873406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.725 [2024-07-13 22:19:39.892547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.725 [2024-07-13 22:19:39.892599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.725 [2024-07-13 22:19:39.892647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.725 [2024-07-13 22:19:39.913911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.725 [2024-07-13 22:19:39.913969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.725 [2024-07-13 22:19:39.914008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.725 [2024-07-13 22:19:39.930782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.725 [2024-07-13 22:19:39.930834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.725 [2024-07-13 22:19:39.930889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.725 [2024-07-13 22:19:39.946723] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.725 [2024-07-13 22:19:39.946773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.725 [2024-07-13 22:19:39.946820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.725 [2024-07-13 22:19:39.964120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.725 [2024-07-13 22:19:39.964178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.725 [2024-07-13 22:19:39.964238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.725 [2024-07-13 22:19:39.983095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.725 [2024-07-13 22:19:39.983141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.725 [2024-07-13 22:19:39.983205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.725 [2024-07-13 22:19:40.000487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.725 [2024-07-13 22:19:40.000539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.725 [2024-07-13 22:19:40.000586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.725 [2024-07-13 22:19:40.019550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.725 [2024-07-13 22:19:40.019634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.725 [2024-07-13 22:19:40.019701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.725 [2024-07-13 22:19:40.039782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.725 [2024-07-13 22:19:40.039845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.725 [2024-07-13 22:19:40.039925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.725 [2024-07-13 22:19:40.056861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.726 [2024-07-13 22:19:40.056941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.726 [2024-07-13 22:19:40.056984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.726 [2024-07-13 22:19:40.078766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.726 [2024-07-13 22:19:40.078821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.726 [2024-07-13 22:19:40.078885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.726 [2024-07-13 22:19:40.097904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.726 [2024-07-13 22:19:40.097969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.726 [2024-07-13 22:19:40.098008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.726 [2024-07-13 22:19:40.114200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.726 [2024-07-13 22:19:40.114267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.726 [2024-07-13 22:19:40.114314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.984 [2024-07-13 22:19:40.134539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.984 [2024-07-13 22:19:40.134591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.984 [2024-07-13 22:19:40.134638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.984 [2024-07-13 22:19:40.154187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.984 [2024-07-13 22:19:40.154251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.984 [2024-07-13 22:19:40.154299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.984 [2024-07-13 22:19:40.172567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.984 [2024-07-13 22:19:40.172619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.984 [2024-07-13 22:19:40.172665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.984 [2024-07-13 22:19:40.188852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.984 [2024-07-13 22:19:40.188932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.984 [2024-07-13 22:19:40.188976] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.984 [2024-07-13 22:19:40.206218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.984 [2024-07-13 22:19:40.206269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.984 [2024-07-13 22:19:40.206318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.984 [2024-07-13 22:19:40.226005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.984 [2024-07-13 22:19:40.226047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.984 [2024-07-13 22:19:40.226086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.984 [2024-07-13 22:19:40.241332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.984 [2024-07-13 22:19:40.241382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.984 [2024-07-13 22:19:40.241430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.984 [2024-07-13 22:19:40.261182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.984 [2024-07-13 22:19:40.261246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.984 [2024-07-13 22:19:40.261294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.984 [2024-07-13 22:19:40.278965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.984 [2024-07-13 22:19:40.279012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.984 [2024-07-13 22:19:40.279054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.984 [2024-07-13 22:19:40.296609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.984 [2024-07-13 22:19:40.296661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.984 [2024-07-13 22:19:40.296708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.984 [2024-07-13 22:19:40.313761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.984 [2024-07-13 22:19:40.313812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5438 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.984 [2024-07-13 22:19:40.313860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.984 [2024-07-13 22:19:40.328162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.984 [2024-07-13 22:19:40.328224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.984 [2024-07-13 22:19:40.328280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.984 [2024-07-13 22:19:40.349105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.984 [2024-07-13 22:19:40.349151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.984 [2024-07-13 22:19:40.349193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:20.984 [2024-07-13 22:19:40.366346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:20.984 [2024-07-13 22:19:40.366399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:20.984 [2024-07-13 22:19:40.366445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.244 [2024-07-13 22:19:40.387934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.244 [2024-07-13 22:19:40.387982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.244 [2024-07-13 22:19:40.388024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.244 [2024-07-13 22:19:40.402665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.244 [2024-07-13 22:19:40.402716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.244 [2024-07-13 22:19:40.402763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.244 [2024-07-13 22:19:40.420635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.244 [2024-07-13 22:19:40.420686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.244 [2024-07-13 22:19:40.420733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.244 [2024-07-13 22:19:40.438004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.244 [2024-07-13 22:19:40.438050] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.244 [2024-07-13 22:19:40.438093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.244 [2024-07-13 22:19:40.457141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.244 [2024-07-13 22:19:40.457183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.244 [2024-07-13 22:19:40.457242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.244 [2024-07-13 22:19:40.476267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.244 [2024-07-13 22:19:40.476318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.244 [2024-07-13 22:19:40.476367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.244 [2024-07-13 22:19:40.492632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.244 [2024-07-13 22:19:40.492689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.244 [2024-07-13 22:19:40.492736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.244 [2024-07-13 22:19:40.508718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.244 [2024-07-13 22:19:40.508768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.244 [2024-07-13 22:19:40.508815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.244 [2024-07-13 22:19:40.527963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.244 [2024-07-13 22:19:40.528009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.244 [2024-07-13 22:19:40.528051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.244 [2024-07-13 22:19:40.545031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.244 [2024-07-13 22:19:40.545073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.244 [2024-07-13 22:19:40.545114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.244 [2024-07-13 22:19:40.565917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150001f2a00) 00:36:21.244 [2024-07-13 22:19:40.565964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.244 [2024-07-13 22:19:40.566005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.244 [2024-07-13 22:19:40.580505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.244 [2024-07-13 22:19:40.580556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.244 [2024-07-13 22:19:40.580603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.244 [2024-07-13 22:19:40.598988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.244 [2024-07-13 22:19:40.599033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.244 [2024-07-13 22:19:40.599073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.244 [2024-07-13 22:19:40.617928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.244 [2024-07-13 22:19:40.617974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.244 [2024-07-13 22:19:40.618016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.244 [2024-07-13 22:19:40.634413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.244 [2024-07-13 22:19:40.634465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.244 [2024-07-13 22:19:40.634513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.504 [2024-07-13 22:19:40.653563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.504 [2024-07-13 22:19:40.653616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.504 [2024-07-13 22:19:40.653662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.504 [2024-07-13 22:19:40.672199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.504 [2024-07-13 22:19:40.672250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.504 [2024-07-13 22:19:40.672297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.504 [2024-07-13 22:19:40.688716] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.504 [2024-07-13 22:19:40.688766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.504 [2024-07-13 22:19:40.688814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.504 [2024-07-13 22:19:40.707366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.504 [2024-07-13 22:19:40.707418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.504 [2024-07-13 22:19:40.707466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.504 [2024-07-13 22:19:40.722995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.504 [2024-07-13 22:19:40.723037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.504 [2024-07-13 22:19:40.723075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.504 [2024-07-13 22:19:40.741790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.504 [2024-07-13 22:19:40.741841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.504 [2024-07-13 22:19:40.741913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.504 [2024-07-13 22:19:40.758427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.504 [2024-07-13 22:19:40.758478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.504 [2024-07-13 22:19:40.758526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.504 [2024-07-13 22:19:40.779308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.504 [2024-07-13 22:19:40.779360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.504 [2024-07-13 22:19:40.779407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.504 [2024-07-13 22:19:40.794255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.504 [2024-07-13 22:19:40.794314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.504 [2024-07-13 22:19:40.794362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.504 [2024-07-13 22:19:40.813611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.504 [2024-07-13 22:19:40.813661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.504 [2024-07-13 22:19:40.813708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.504 [2024-07-13 22:19:40.829103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.504 [2024-07-13 22:19:40.829145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.504 [2024-07-13 22:19:40.829201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.504 [2024-07-13 22:19:40.850438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.504 [2024-07-13 22:19:40.850490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.504 [2024-07-13 22:19:40.850537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.504 [2024-07-13 22:19:40.869289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.504 [2024-07-13 22:19:40.869341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.504 [2024-07-13 22:19:40.869388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.504 [2024-07-13 22:19:40.884672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.504 [2024-07-13 22:19:40.884724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.504 [2024-07-13 22:19:40.884772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.762 [2024-07-13 22:19:40.903639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.762 [2024-07-13 22:19:40.903692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.762 [2024-07-13 22:19:40.903739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.762 [2024-07-13 22:19:40.923271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.762 [2024-07-13 22:19:40.923322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.762 [2024-07-13 22:19:40.923370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.762 [2024-07-13 22:19:40.940504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.762 [2024-07-13 22:19:40.940550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.762 [2024-07-13 22:19:40.940593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.762 [2024-07-13 22:19:40.954621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.763 [2024-07-13 22:19:40.954680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.763 [2024-07-13 22:19:40.954723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.763 [2024-07-13 22:19:40.972630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.763 [2024-07-13 22:19:40.972676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.763 [2024-07-13 22:19:40.972719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.763 [2024-07-13 22:19:40.986618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.763 [2024-07-13 22:19:40.986674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.763 [2024-07-13 22:19:40.986713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.763 [2024-07-13 22:19:41.004723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.763 [2024-07-13 22:19:41.004769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.763 [2024-07-13 22:19:41.004812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.763 [2024-07-13 22:19:41.021906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.763 [2024-07-13 22:19:41.021970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.763 [2024-07-13 22:19:41.022012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.763 [2024-07-13 22:19:41.044369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.763 [2024-07-13 22:19:41.044422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13045 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.763 [2024-07-13 22:19:41.044469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.763 [2024-07-13 22:19:41.059147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.763 [2024-07-13 22:19:41.059208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.763 [2024-07-13 22:19:41.059255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.763 [2024-07-13 22:19:41.078003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.763 [2024-07-13 22:19:41.078045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.763 [2024-07-13 22:19:41.078084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.763 [2024-07-13 22:19:41.096970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.763 [2024-07-13 22:19:41.097023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.763 [2024-07-13 22:19:41.097067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.763 [2024-07-13 22:19:41.112832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.763 [2024-07-13 22:19:41.112891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.763 [2024-07-13 22:19:41.112948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.763 [2024-07-13 22:19:41.134360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.763 [2024-07-13 22:19:41.134412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.763 [2024-07-13 22:19:41.134460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:21.763 [2024-07-13 22:19:41.152522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:21.763 [2024-07-13 22:19:41.152573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:21.763 [2024-07-13 22:19:41.152621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:22.021 [2024-07-13 22:19:41.168393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:22.021 [2024-07-13 22:19:41.168445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:22.021 [2024-07-13 22:19:41.168493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:22.021 [2024-07-13 22:19:41.186325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:22.021 [2024-07-13 22:19:41.186376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:22.021 [2024-07-13 22:19:41.186426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:22.021 [2024-07-13 22:19:41.204353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:22.021 [2024-07-13 22:19:41.204405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:22.021 [2024-07-13 22:19:41.204452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:22.021 [2024-07-13 22:19:41.222043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:22.021 [2024-07-13 22:19:41.222089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:22.021 [2024-07-13 22:19:41.222144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:22.021 [2024-07-13 22:19:41.239624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:22.021 [2024-07-13 22:19:41.239675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:22.021 [2024-07-13 22:19:41.239723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:22.021 [2024-07-13 22:19:41.257991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:22.021 [2024-07-13 22:19:41.258038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:22.021 [2024-07-13 22:19:41.258080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:22.021 [2024-07-13 22:19:41.274830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:22.021 [2024-07-13 22:19:41.274891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:22.021 [2024-07-13 22:19:41.274951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:22.021 [2024-07-13 22:19:41.290685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x6150001f2a00) 00:36:22.021 [2024-07-13 22:19:41.290737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:22.021 [2024-07-13 22:19:41.290784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:22.021 [2024-07-13 22:19:41.309909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:22.021 [2024-07-13 22:19:41.309954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:22.021 [2024-07-13 22:19:41.310013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:22.021 [2024-07-13 22:19:41.329658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:22.021 [2024-07-13 22:19:41.329710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:22.021 [2024-07-13 22:19:41.329757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:22.021 [2024-07-13 22:19:41.347129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:22.021 [2024-07-13 22:19:41.347175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:22.021 [2024-07-13 22:19:41.347235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:22.021 [2024-07-13 22:19:41.361972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:22.021 [2024-07-13 22:19:41.362013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:22.021 [2024-07-13 22:19:41.362052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:22.021 [2024-07-13 22:19:41.380686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:22.021 [2024-07-13 22:19:41.380736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:22.021 [2024-07-13 22:19:41.380782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:22.021 [2024-07-13 22:19:41.401240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:22.021 [2024-07-13 22:19:41.401299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:22.021 [2024-07-13 22:19:41.401346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:22.278 [2024-07-13 22:19:41.417786] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:22.278 [2024-07-13 22:19:41.417837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:22.278 [2024-07-13 22:19:41.417908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:22.278 [2024-07-13 22:19:41.435730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:22.278 [2024-07-13 22:19:41.435780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:22.278 [2024-07-13 22:19:41.435828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:22.278 [2024-07-13 22:19:41.452443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:22.278 [2024-07-13 22:19:41.452495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:22.278 [2024-07-13 22:19:41.452545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:22.278 [2024-07-13 22:19:41.471510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:22.278 [2024-07-13 22:19:41.471561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:22.278 [2024-07-13 22:19:41.471608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:22.278 [2024-07-13 22:19:41.489821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:22.278 [2024-07-13 22:19:41.489895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:22.278 [2024-07-13 22:19:41.489956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:22.278 [2024-07-13 22:19:41.507397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:22.278 [2024-07-13 22:19:41.507449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:22.278 [2024-07-13 22:19:41.507506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:22.278 [2024-07-13 22:19:41.527417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:22.279 [2024-07-13 22:19:41.527468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:22.279 [2024-07-13 22:19:41.527515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
00:36:22.279
00:36:22.279 Latency(us)
00:36:22.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:22.279 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:36:22.279 nvme0n1 : 2.01 14193.52 55.44 0.00 0.00 9003.52 4805.97 24855.13
00:36:22.279 ===================================================================================================================
00:36:22.279 Total : 14193.52 55.44 0.00 0.00 9003.52 4805.97 24855.13
00:36:22.279 0
00:36:22.279 22:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:22.279 22:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:22.279 22:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:22.279 22:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:22.279 | .driver_specific
00:36:22.279 | .nvme_error
00:36:22.279 | .status_code
00:36:22.279 | .command_transient_transport_error'
00:36:22.536 22:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 111 > 0 ))
22:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 44245
22:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 44245 ']'
22:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 44245
22:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
22:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
22:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 44245
22:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
22:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
22:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 44245'
killing process with pid 44245
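[Note: the assertion (( 111 > 0 )) above checks the per-bdev NVMe error statistics that bdevperf keeps because the test enabled bdev_nvme_set_options --nvme-error-stat; here 111 READ completions of the 2-second run ended in COMMAND TRANSIENT TRANSPORT ERROR (00/22). A minimal standalone sketch of the same query, assuming the RPC socket path and iostat JSON shape shown in this log:

    # Ask bdevperf for iostat on nvme0n1 over its private RPC socket, then pull
    # out the transient transport error counter from the nvme driver stats.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
]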
22:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 44245
00:36:22.536 Received shutdown signal, test time was about 2.000000 seconds
00:36:22.536
00:36:22.536 Latency(us)
00:36:22.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:22.536 ===================================================================================================================
00:36:22.536 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:22.536 22:19:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 44245
00:36:23.914 22:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
22:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
22:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
22:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
22:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
22:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=44902
22:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
22:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 44902 /var/tmp/bperf.sock
22:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 44902 ']'
22:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
22:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
22:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
22:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
22:19:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:23.914 [2024-07-13 22:19:42.999853] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
[2024-07-13 22:19:43.000003] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid44902 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
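[Note: the bdevperf relaunch above uses -z, so the process comes up idle and just listens on /var/tmp/bperf.sock; the randread workload (131072-byte I/O, queue depth 16, 2 seconds, core mask 0x2) does not start until a perform_tests RPC arrives. A minimal sketch of the same launch outside the harness, assuming an SPDK build tree in the current directory:

    # Pin bdevperf to core 1 (-m 2), give it a private RPC socket, and defer
    # the run (-z) until perform_tests is sent on that socket.
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
]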
00:36:23.914 EAL: No free 2048 kB hugepages reported on node 1
00:36:23.914 [2024-07-13 22:19:43.128857] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:24.172 [2024-07-13 22:19:43.389566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:24.738 22:19:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
22:19:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
22:19:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
22:19:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:24.996 22:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
22:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
22:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
22:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
22:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
22:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:25.254 nvme0n1
00:36:25.254 22:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
22:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
22:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
22:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
22:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
22:19:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:25.512 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:25.512 Zero copy mechanism will not be used.
00:36:25.512 Running I/O for 2 seconds...
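[Note: the RPC sequence above is what makes the flood of digest errors below deterministic. The controller is attached with --ddgst, so the host verifies a CRC32C data digest on every data PDU it receives; accel_error_inject_error -o crc32c -t corrupt -i 32 then has the accel layer's crc32c path return corrupted results at the interval given by -i (apparently every 32nd operation, consistent with the sqhd values below advancing by 0x20 between errors), so the check at nvme_tcp.c:1459 fails and the affected READs complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22). A condensed sketch of the same setup, assuming the socket and target address used in this log; note injection is enabled only after attach, so connect-time traffic is left intact:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    # Keep per-bdev NVMe error statistics and retry failed I/O indefinitely.
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Attach with data digest enabled so corrupted digests are actually caught.
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt the crc32c results computed by the accel framework at interval 32.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
    # Kick off the deferred bdevperf run.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests
]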
00:36:25.512 [2024-07-13 22:19:44.761198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:25.512 [2024-07-13 22:19:44.761291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:25.512 [2024-07-13 22:19:44.761342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern repeats throughout the 2-second run (READ sqid:1 cid:15, len:32, varying lba, sqhd cycling 0001/0021/0041/0061) from 22:19:44.774 through 22:19:46.256 ...]
00:36:27.071 [2024-07-13 22:19:46.269517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00)
00:36:27.071 [2024-07-13 22:19:46.269570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:27.071 [2024-07-13 22:19:46.269618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:27.071 [2024-07-13 22:19:46.282795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.071 [2024-07-13 22:19:46.282847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.071 [2024-07-13 22:19:46.282913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:27.071 [2024-07-13 22:19:46.296212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.071 [2024-07-13 22:19:46.296273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.071 [2024-07-13 22:19:46.296321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:27.071 [2024-07-13 22:19:46.309484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.071 [2024-07-13 22:19:46.309535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.071 [2024-07-13 22:19:46.309583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.071 [2024-07-13 22:19:46.322649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.071 [2024-07-13 22:19:46.322701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.071 [2024-07-13 22:19:46.322749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:27.071 [2024-07-13 22:19:46.335902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.071 [2024-07-13 22:19:46.335970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.071 [2024-07-13 22:19:46.336014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:27.071 [2024-07-13 22:19:46.348658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.071 [2024-07-13 22:19:46.348710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.071 [2024-07-13 22:19:46.348769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:27.071 [2024-07-13 22:19:46.361663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.071 [2024-07-13 22:19:46.361714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.071 [2024-07-13 22:19:46.361763] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.071 [2024-07-13 22:19:46.374954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.071 [2024-07-13 22:19:46.375001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.071 [2024-07-13 22:19:46.375042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:27.071 [2024-07-13 22:19:46.387955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.071 [2024-07-13 22:19:46.388002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.071 [2024-07-13 22:19:46.388044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:27.071 [2024-07-13 22:19:46.400718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.071 [2024-07-13 22:19:46.400769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.071 [2024-07-13 22:19:46.400817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:27.071 [2024-07-13 22:19:46.413451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.071 [2024-07-13 22:19:46.413504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.071 [2024-07-13 22:19:46.413551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.071 [2024-07-13 22:19:46.426238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.071 [2024-07-13 22:19:46.426291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.071 [2024-07-13 22:19:46.426339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:27.071 [2024-07-13 22:19:46.439134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.071 [2024-07-13 22:19:46.439193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.071 [2024-07-13 22:19:46.439250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:27.071 [2024-07-13 22:19:46.451965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.071 [2024-07-13 22:19:46.452011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.071 [2024-07-13 22:19:46.452055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:27.328 [2024-07-13 22:19:46.464834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.328 [2024-07-13 22:19:46.464910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.328 [2024-07-13 22:19:46.464954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.328 [2024-07-13 22:19:46.477881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.328 [2024-07-13 22:19:46.477947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.328 [2024-07-13 22:19:46.477988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:27.328 [2024-07-13 22:19:46.490447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.328 [2024-07-13 22:19:46.490499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.328 [2024-07-13 22:19:46.490548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:27.328 [2024-07-13 22:19:46.503359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.328 [2024-07-13 22:19:46.503411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.328 [2024-07-13 22:19:46.503459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:27.328 [2024-07-13 22:19:46.516067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.328 [2024-07-13 22:19:46.516123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.328 [2024-07-13 22:19:46.516180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.328 [2024-07-13 22:19:46.529129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.328 [2024-07-13 22:19:46.529189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.328 [2024-07-13 22:19:46.529249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:27.328 [2024-07-13 22:19:46.542054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.328 [2024-07-13 
22:19:46.542099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.328 [2024-07-13 22:19:46.542146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:27.328 [2024-07-13 22:19:46.555181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.329 [2024-07-13 22:19:46.555235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.329 [2024-07-13 22:19:46.555282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:27.329 [2024-07-13 22:19:46.567920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.329 [2024-07-13 22:19:46.567971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.329 [2024-07-13 22:19:46.568019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.329 [2024-07-13 22:19:46.581186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.329 [2024-07-13 22:19:46.581238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.329 [2024-07-13 22:19:46.581285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:27.329 [2024-07-13 22:19:46.594153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.329 [2024-07-13 22:19:46.594220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.329 [2024-07-13 22:19:46.594269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:27.329 [2024-07-13 22:19:46.607093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.329 [2024-07-13 22:19:46.607140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.329 [2024-07-13 22:19:46.607201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:27.329 [2024-07-13 22:19:46.620008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.329 [2024-07-13 22:19:46.620054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.329 [2024-07-13 22:19:46.620096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.329 [2024-07-13 22:19:46.632657] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.329 [2024-07-13 22:19:46.632709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.329 [2024-07-13 22:19:46.632758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:27.329 [2024-07-13 22:19:46.645524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.329 [2024-07-13 22:19:46.645575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.329 [2024-07-13 22:19:46.645623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:27.329 [2024-07-13 22:19:46.658328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.329 [2024-07-13 22:19:46.658380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.329 [2024-07-13 22:19:46.658427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:27.329 [2024-07-13 22:19:46.671166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.329 [2024-07-13 22:19:46.671239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.329 [2024-07-13 22:19:46.671296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.329 [2024-07-13 22:19:46.684120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.329 [2024-07-13 22:19:46.684180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.329 [2024-07-13 22:19:46.684228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:27.329 [2024-07-13 22:19:46.697066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.329 [2024-07-13 22:19:46.697112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.329 [2024-07-13 22:19:46.697155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:27.329 [2024-07-13 22:19:46.709855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.329 [2024-07-13 22:19:46.709917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.329 [2024-07-13 22:19:46.709973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:27.587 [2024-07-13 22:19:46.722682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.587 [2024-07-13 22:19:46.722734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.587 [2024-07-13 22:19:46.722782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:27.587 [2024-07-13 22:19:46.735554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.587 [2024-07-13 22:19:46.735615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.587 [2024-07-13 22:19:46.735664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:27.587 [2024-07-13 22:19:46.748430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150001f2a00) 00:36:27.587 [2024-07-13 22:19:46.748482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:27.587 [2024-07-13 22:19:46.748530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:27.587 00:36:27.587 Latency(us) 00:36:27.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:27.587 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:36:27.587 nvme0n1 : 2.00 2267.41 283.43 0.00 0.00 7049.55 6092.42 16893.72 00:36:27.587 =================================================================================================================== 00:36:27.587 Total : 2267.41 283.43 0.00 0.00 7049.55 6092.42 16893.72 00:36:27.587 0 00:36:27.587 22:19:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:27.587 22:19:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:27.587 22:19:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:27.587 | .driver_specific 00:36:27.587 | .nvme_error 00:36:27.587 | .status_code 00:36:27.587 | .command_transient_transport_error' 00:36:27.587 22:19:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:27.845 22:19:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 146 > 0 )) 00:36:27.845 22:19:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 44902 00:36:27.845 22:19:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 44902 ']' 00:36:27.845 22:19:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 44902 00:36:27.845 22:19:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:36:27.845 22:19:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:27.845 22:19:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
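For reference, the get_transient_errcount helper traced above reduces to one RPC plus a jq filter: bdev_get_iostat is issued against bdevperf's RPC socket, and the per-controller command_transient_transport_error counter (populated because the bdev_nvme module runs with --nvme-error-stat) is extracted from the driver-specific stats. A minimal sketch reconstructed from the traced commands; the function scaffolding is added here, but the RPC name, the jq filter, and the socket path appear verbatim in the trace:

#!/usr/bin/env bash
# Sketch of the host/digest.sh helpers as seen in the xtrace output above.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

bperf_rpc() {
    # All bperf RPCs target the UNIX socket bdevperf was started with (-r).
    "$rpc_py" -s /var/tmp/bperf.sock "$@"
}

get_transient_errcount() {
    # bdev_get_iostat exposes NVMe error counters under driver_specific
    # when the controller was set up with bdev_nvme_set_options --nvme-error-stat.
    bperf_rpc bdev_get_iostat -b "$1" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

# The assertion traced above: the randread pass must have logged at least
# one COMMAND TRANSIENT TRANSPORT ERROR completion (146 in this run).
(($(get_transient_errcount nvme0n1) > 0))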
00:36:27.845 22:19:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:27.845 22:19:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:27.845 22:19:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 44902'
00:36:27.845 killing process with pid 44902
00:36:27.845 22:19:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 44902
00:36:27.845 Received shutdown signal, test time was about 2.000000 seconds
00:36:27.845
00:36:27.845 Latency(us)
00:36:27.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:27.845 ===================================================================================================================
00:36:27.845 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:27.845 22:19:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 44902
00:36:28.784 22:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:36:28.784 22:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:28.784 22:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:36:28.784 22:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:36:28.784 22:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:36:28.784 22:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=45450
00:36:28.784 22:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:36:28.784 22:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 45450 /var/tmp/bperf.sock
00:36:28.784 22:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 45450 ']'
00:36:28.784 22:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:28.784 22:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:36:28.784 22:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:28.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:28.784 22:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:36:28.784 22:19:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:29.042 [2024-07-13 22:19:48.225047] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
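The run_bperf_err trace above shows the second pass being wired up: bdevperf is launched idle (-z) on its own RPC socket, and waitforlisten polls that socket for up to max_retries before any configuration RPC is sent. A sketch of that launch sequence, under stated assumptions: the flag comments and the polling loop are added here (the traced waitforlisten lives in autotest_common.sh; rpc_get_methods is used as a liveness probe in this stand-in):

#!/usr/bin/env bash
# Launch bdevperf for the randwrite error pass, as traced by run_bperf_err.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bperf_sock=/var/tmp/bperf.sock

bperf_args=(
    -m 2             # core mask: one reactor, pinned to core 1
    -r "$bperf_sock" # RPC listen socket used by bperf_rpc / bperf_py
    -w randwrite     # workload ($rw)
    -o 4096          # I/O size in bytes ($bs)
    -t 2             # run time in seconds
    -q 128           # queue depth ($qd)
    -z               # start idle and wait for the perform_tests RPC
)
"$spdk/build/examples/bdevperf" "${bperf_args[@]}" &
bperfpid=$!

# Simplified stand-in for the traced waitforlisten: poll the RPC socket
# until it answers (max_retries=100 in the trace).
for ((i = 0; i < 100; i++)); do
    "$spdk/scripts/rpc.py" -s "$bperf_sock" rpc_get_methods &> /dev/null && break
    sleep 0.1
done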
00:36:29.042 [2024-07-13 22:19:48.225183] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid45450 ]
00:36:29.042 EAL: No free 2048 kB hugepages reported on node 1
00:36:29.042 [2024-07-13 22:19:48.363462] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:29.300 [2024-07-13 22:19:48.622326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:29.867 22:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:29.867 22:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:36:29.867 22:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:29.867 22:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:30.125 22:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:30.125 22:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:30.126 22:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:30.126 22:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:30.126 22:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:30.126 22:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:30.694 nvme0n1
00:36:30.694 22:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:36:30.694 22:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:30.694 22:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:30.694 22:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:30.694 22:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:36:30.694 22:19:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:30.694 Running I/O for 2 seconds...
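Gathered in one place, the RPC sequence just traced configures both sides before perform_tests starts the clock. Two endpoints are involved: bperf_rpc drives the bdevperf initiator on /var/tmp/bperf.sock, while rpc_cmd drives the NVMe-oF target application over its default RPC socket; the trace does not show that path, so /var/tmp/spdk.sock is assumed in this sketch. Every command and option below appears verbatim in the trace:

#!/usr/bin/env bash
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_py=$spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock
tgt_sock=/var/tmp/spdk.sock   # assumed default; not shown in the trace

# Initiator: count NVMe error completions per status code and retry I/O
# forever instead of failing it (hence Fail/s stays 0.00 while the
# transient-error counter climbs).
"$rpc_py" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Target: keep the CRC-32C corruptor disabled while the controller connects.
"$rpc_py" -s "$tgt_sock" accel_error_inject_error -o crc32c -t disable

# Initiator: attach the NVMe/TCP controller with data digest (--ddgst) on;
# this creates the nvme0n1 bdev echoed in the log above.
"$rpc_py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Target: corrupt the next 256 crc32c operations, so the digests computed
# for incoming WRITE data all mismatch; each mismatch completes as
# COMMAND TRANSIENT TRANSPORT ERROR (00/22), the tcp.c:2067 pattern below.
"$rpc_py" -s "$tgt_sock" accel_error_inject_error -o crc32c -t corrupt -i 256

# Start the timed 2-second workload.
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests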
00:36:30.954 [2024-07-13 22:19:50.102641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8
00:36:30.954 [2024-07-13 22:19:50.103063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:30.954 [2024-07-13 22:19:50.103124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
[... the same three-record pattern (tcp.c:2067 Data digest error -> WRITE command -> COMMAND TRANSIENT TRANSPORT ERROR (00/22) sqhd:007d) repeats roughly every 20 ms from 22:19:50.123 through 22:19:51.295, differing only in timestamp, cid, and lba ...]
00:36:32.045 [2024-07-13 22:19:51.314857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8
00:36:32.045 [2024-07-13 22:19:51.315221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:32.045 [2024-07-13 22:19:51.315266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d
p:0 m:0 dnr:0 00:36:32.045 [2024-07-13 22:19:51.334347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.045 [2024-07-13 22:19:51.334698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.045 [2024-07-13 22:19:51.334744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.045 [2024-07-13 22:19:51.353952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.045 [2024-07-13 22:19:51.354308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.045 [2024-07-13 22:19:51.354353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.045 [2024-07-13 22:19:51.373466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.045 [2024-07-13 22:19:51.373818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.045 [2024-07-13 22:19:51.373864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.045 [2024-07-13 22:19:51.393077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.045 [2024-07-13 22:19:51.393443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.045 [2024-07-13 22:19:51.393490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.045 [2024-07-13 22:19:51.412804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.045 [2024-07-13 22:19:51.413167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.045 [2024-07-13 22:19:51.413214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.045 [2024-07-13 22:19:51.432474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.045 [2024-07-13 22:19:51.432831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.046 [2024-07-13 22:19:51.432885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.305 [2024-07-13 22:19:51.453333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.305 [2024-07-13 22:19:51.453694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.305 [2024-07-13 22:19:51.453747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.305 [2024-07-13 22:19:51.473215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.305 [2024-07-13 22:19:51.473574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.305 [2024-07-13 22:19:51.473619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.305 [2024-07-13 22:19:51.492925] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.305 [2024-07-13 22:19:51.493280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.305 [2024-07-13 22:19:51.493325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.305 [2024-07-13 22:19:51.512525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.305 [2024-07-13 22:19:51.512892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.305 [2024-07-13 22:19:51.512941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.305 [2024-07-13 22:19:51.532303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.305 [2024-07-13 22:19:51.532661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.305 [2024-07-13 22:19:51.532706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.305 [2024-07-13 22:19:51.551923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.305 [2024-07-13 22:19:51.552285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.305 [2024-07-13 22:19:51.552330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.305 [2024-07-13 22:19:51.572053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.305 [2024-07-13 22:19:51.572407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.305 [2024-07-13 22:19:51.572452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.305 [2024-07-13 22:19:51.591737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.305 [2024-07-13 22:19:51.592105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.305 [2024-07-13 22:19:51.592150] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.305 [2024-07-13 22:19:51.611343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.305 [2024-07-13 22:19:51.611695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.305 [2024-07-13 22:19:51.611741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.305 [2024-07-13 22:19:51.630876] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.305 [2024-07-13 22:19:51.631232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.305 [2024-07-13 22:19:51.631278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.305 [2024-07-13 22:19:51.650388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.305 [2024-07-13 22:19:51.650741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.305 [2024-07-13 22:19:51.650787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.305 [2024-07-13 22:19:51.669809] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.305 [2024-07-13 22:19:51.670179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.305 [2024-07-13 22:19:51.670226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.305 [2024-07-13 22:19:51.689399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.305 [2024-07-13 22:19:51.689754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.305 [2024-07-13 22:19:51.689800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.566 [2024-07-13 22:19:51.709569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.566 [2024-07-13 22:19:51.709924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.566 [2024-07-13 22:19:51.709970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.566 [2024-07-13 22:19:51.729345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.566 [2024-07-13 22:19:51.729703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:32.566 [2024-07-13 22:19:51.729749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.566 [2024-07-13 22:19:51.749061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.566 [2024-07-13 22:19:51.749416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.566 [2024-07-13 22:19:51.749462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.566 [2024-07-13 22:19:51.768652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.566 [2024-07-13 22:19:51.769014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.566 [2024-07-13 22:19:51.769061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.566 [2024-07-13 22:19:51.788389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.566 [2024-07-13 22:19:51.788742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.566 [2024-07-13 22:19:51.788795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.566 [2024-07-13 22:19:51.808012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.566 [2024-07-13 22:19:51.808364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.566 [2024-07-13 22:19:51.808409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.566 [2024-07-13 22:19:51.827765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.566 [2024-07-13 22:19:51.828134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.566 [2024-07-13 22:19:51.828179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.566 [2024-07-13 22:19:51.847379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.566 [2024-07-13 22:19:51.847729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.566 [2024-07-13 22:19:51.847775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.566 [2024-07-13 22:19:51.867198] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.566 [2024-07-13 22:19:51.867552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:22970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.566 [2024-07-13 22:19:51.867598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.566 [2024-07-13 22:19:51.886610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.566 [2024-07-13 22:19:51.886963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.566 [2024-07-13 22:19:51.887009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.566 [2024-07-13 22:19:51.906155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.566 [2024-07-13 22:19:51.906511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.566 [2024-07-13 22:19:51.906558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.566 [2024-07-13 22:19:51.925747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.566 [2024-07-13 22:19:51.926121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.566 [2024-07-13 22:19:51.926167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.566 [2024-07-13 22:19:51.945130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.566 [2024-07-13 22:19:51.945480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.566 [2024-07-13 22:19:51.945526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.825 [2024-07-13 22:19:51.964973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.825 [2024-07-13 22:19:51.965339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.825 [2024-07-13 22:19:51.965386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.825 [2024-07-13 22:19:51.984541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.825 [2024-07-13 22:19:51.984904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:32.825 [2024-07-13 22:19:51.984950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:32.825 [2024-07-13 22:19:52.004136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8 00:36:32.825 [2024-07-13 22:19:52.004491] nvme_qpair.c: 
00:36:32.826 [2024-07-13 22:19:52.082193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005480) with pdu=0x2000195fe2e8
00:36:32.826 [2024-07-13 22:19:52.082545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:32.826 [2024-07-13 22:19:52.082590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:36:32.826
00:36:32.826 Latency(us)
00:36:32.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:32.826 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:36:32.826 nvme0n1 : 2.01 12862.94 50.25 0.00 0.00 9921.29 6189.51 22524.97
00:36:32.826 ===================================================================================================================
00:36:32.826 Total : 12862.94 50.25 0.00 0.00 9921.29 6189.51 22524.97
00:36:32.826 0
00:36:32.826 22:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:32.826 22:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:32.826 22:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:32.826 | .driver_specific
00:36:32.826 | .nvme_error
00:36:32.826 | .status_code
00:36:32.826 | .command_transient_transport_error'
00:36:32.826 22:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:33.086 22:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 101 > 0 ))
00:36:33.086 22:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 45450
00:36:33.086 22:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 45450 ']'
00:36:33.086 22:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 45450
00:36:33.086 22:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:36:33.086 22:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:33.086 22:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 45450
00:36:33.086 22:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:33.086 22:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:33.086 22:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45450'
00:36:33.086 killing process with pid 45450
00:36:33.086 22:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 45450
00:36:33.086 Received shutdown signal, test time was about 2.000000 seconds
00:36:33.086
00:36:33.086 Latency(us)
00:36:33.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:33.086 ===================================================================================================================
00:36:33.086 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:33.086 22:19:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 45450
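The get_transient_errcount step traced above amounts, in effect, to the shell below; a minimal sketch assuming the same /var/tmp/bperf.sock RPC socket and nvme0n1 bdev shown in the trace (the errcount variable name is illustrative, not taken from the script):

    # Read back the per-bdev NVMe error counters enabled via
    # bdev_nvme_set_options --nvme-error-stat, and pick out the transient
    # transport error bucket that the injected digest corruption increments.
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
                   -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
               jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))   # 101 in the run above, so the check passes

The jq path follows the bdev_get_iostat JSON shape used by the test itself: driver_specific.nvme_error.status_code.command_transient_transport_error counts completions with status (00/22).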
00:36:34.036 22:19:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:36:34.036 22:19:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:36:34.036 22:19:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:36:34.036 22:19:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:36:34.036 22:19:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:36:34.036 22:19:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=46109
00:36:34.036 22:19:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:36:34.036 22:19:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 46109 /var/tmp/bperf.sock
00:36:34.036 22:19:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 46109 ']'
00:36:34.036 22:19:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:34.036 22:19:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:36:34.036 22:19:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:34.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:34.036 22:19:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:36:34.036 22:19:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:34.294 [2024-07-13 22:19:53.444140] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:36:34.294 [2024-07-13 22:19:53.444296] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid46109 ]
00:36:34.294 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:34.294 Zero copy mechanism will not be used.
00:36:34.294 EAL: No free 2048 kB hugepages reported on node 1
00:36:34.294 [2024-07-13 22:19:53.565959] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:34.552 [2024-07-13 22:19:53.810508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:35.118 22:19:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:35.118 22:19:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:36:35.118 22:19:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:35.118 22:19:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:36:35.376 22:19:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:36:35.376 22:19:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:35.376 22:19:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:35.376 22:19:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:35.376 22:19:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:35.376 22:19:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:36:35.944 nvme0n1
00:36:35.944 22:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:36:35.944 22:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:35.944 22:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:35.944 22:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:35.944 22:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:36:35.944 22:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
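Condensed, the setup for this test case amounts to the sequence below; a sketch only, with every command taken from the trace above (the rpc wrapper function is added here for brevity and is not part of the test scripts):

    # Shorthand for the bperf RPC socket used throughout this test case.
    rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep NVMe error counters; retry failed I/O indefinitely
    rpc accel_error_inject_error -o crc32c -t disable                   # start with injection off
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0                          # attach with TCP data digest enabled
    rpc accel_error_inject_error -o crc32c -t corrupt -i 32             # corrupt crc32c results (-i 32 as traced)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests                            # drive the 131072-byte randwrite workload

Because --ddgst is enabled on the attach, each injected CRC-32C corruption surfaces as a data digest error, and with --bdev-retry-count -1 the affected WRITEs complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22) and are retried, which is the record pattern in the output that follows.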
00:36:35.944 I/O size of 131072 is greater than zero copy threshold (65536).
00:36:35.944 Zero copy mechanism will not be used.
00:36:35.944 Running I/O for 2 seconds...
00:36:35.944 [2024-07-13 22:19:55.314017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:36:35.944 [2024-07-13 22:19:55.314695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:35.944 [2024-07-13 22:19:55.314756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... several dozen near-identical record triplets omitted (22:19:55.339799 through 22:19:56.406150): the same data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90, a WRITE (sqid:1 cid:15 throughout, len:32, lba varying), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, repeating roughly every 20 ms ...]
00:36:37.248 [2024-07-13 22:19:56.423830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90
00:36:37.248 [2024-07-13 22:19:56.424327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:36:37.248 [2024-07-13 22:19:56.424365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.248 [2024-07-13 22:19:56.444081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.248 [2024-07-13 22:19:56.444557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.248 [2024-07-13 22:19:56.444599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.248 [2024-07-13 22:19:56.466757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.248 [2024-07-13 22:19:56.467246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.249 [2024-07-13 22:19:56.467299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.249 [2024-07-13 22:19:56.485429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.249 [2024-07-13 22:19:56.485876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.249 [2024-07-13 22:19:56.485940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.249 [2024-07-13 22:19:56.507935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.249 [2024-07-13 22:19:56.508384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.249 [2024-07-13 22:19:56.508423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.249 [2024-07-13 22:19:56.532565] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.249 [2024-07-13 22:19:56.533034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.249 [2024-07-13 22:19:56.533077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.249 [2024-07-13 22:19:56.551449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.249 [2024-07-13 22:19:56.551929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.249 [2024-07-13 22:19:56.551971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.249 [2024-07-13 22:19:56.573149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.249 [2024-07-13 22:19:56.573602] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.249 [2024-07-13 22:19:56.573641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.249 [2024-07-13 22:19:56.595933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.249 [2024-07-13 22:19:56.596406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.249 [2024-07-13 22:19:56.596455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.249 [2024-07-13 22:19:56.614316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.249 [2024-07-13 22:19:56.614743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.249 [2024-07-13 22:19:56.614783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.249 [2024-07-13 22:19:56.633137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.249 [2024-07-13 22:19:56.633581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.249 [2024-07-13 22:19:56.633621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.510 [2024-07-13 22:19:56.654277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.510 [2024-07-13 22:19:56.654732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.510 [2024-07-13 22:19:56.654772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.510 [2024-07-13 22:19:56.675507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.510 [2024-07-13 22:19:56.675976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.510 [2024-07-13 22:19:56.676018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.510 [2024-07-13 22:19:56.693072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.510 [2024-07-13 22:19:56.693478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.510 [2024-07-13 22:19:56.693517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.510 [2024-07-13 22:19:56.710560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) 
with pdu=0x2000195fef90 00:36:37.510 [2024-07-13 22:19:56.710992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.510 [2024-07-13 22:19:56.711034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.510 [2024-07-13 22:19:56.729738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.510 [2024-07-13 22:19:56.730210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.510 [2024-07-13 22:19:56.730252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.510 [2024-07-13 22:19:56.748658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.510 [2024-07-13 22:19:56.749130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.510 [2024-07-13 22:19:56.749186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.510 [2024-07-13 22:19:56.766975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.510 [2024-07-13 22:19:56.767439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.510 [2024-07-13 22:19:56.767478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.510 [2024-07-13 22:19:56.785282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.510 [2024-07-13 22:19:56.785704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.510 [2024-07-13 22:19:56.785744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.510 [2024-07-13 22:19:56.805289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.510 [2024-07-13 22:19:56.805724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.510 [2024-07-13 22:19:56.805763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.510 [2024-07-13 22:19:56.824334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.510 [2024-07-13 22:19:56.824807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.510 [2024-07-13 22:19:56.824874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.510 [2024-07-13 22:19:56.845770] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.510 [2024-07-13 22:19:56.846258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.510 [2024-07-13 22:19:56.846301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.510 [2024-07-13 22:19:56.869559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.510 [2024-07-13 22:19:56.870053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.510 [2024-07-13 22:19:56.870099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.510 [2024-07-13 22:19:56.890195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.510 [2024-07-13 22:19:56.890675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.510 [2024-07-13 22:19:56.890719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.772 [2024-07-13 22:19:56.909639] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.772 [2024-07-13 22:19:56.909969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.772 [2024-07-13 22:19:56.910014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.772 [2024-07-13 22:19:56.934430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.772 [2024-07-13 22:19:56.934953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.772 [2024-07-13 22:19:56.935010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.772 [2024-07-13 22:19:56.957717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.772 [2024-07-13 22:19:56.958232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.772 [2024-07-13 22:19:56.958277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.772 [2024-07-13 22:19:56.977896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.772 [2024-07-13 22:19:56.978378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.772 [2024-07-13 22:19:56.978421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.772 [2024-07-13 22:19:56.997182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.772 [2024-07-13 22:19:56.997644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.772 [2024-07-13 22:19:56.997687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.772 [2024-07-13 22:19:57.018325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.772 [2024-07-13 22:19:57.018798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.772 [2024-07-13 22:19:57.018842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.772 [2024-07-13 22:19:57.037257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.772 [2024-07-13 22:19:57.037716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.772 [2024-07-13 22:19:57.037759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.772 [2024-07-13 22:19:57.055423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.772 [2024-07-13 22:19:57.055898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.772 [2024-07-13 22:19:57.055943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.772 [2024-07-13 22:19:57.075188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.772 [2024-07-13 22:19:57.075665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.772 [2024-07-13 22:19:57.075707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:37.772 [2024-07-13 22:19:57.095257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.772 [2024-07-13 22:19:57.095695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.772 [2024-07-13 22:19:57.095738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:37.772 [2024-07-13 22:19:57.114217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.772 [2024-07-13 22:19:57.114697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.772 [2024-07-13 22:19:57.114740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:37.772 [2024-07-13 22:19:57.136198] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.772 [2024-07-13 22:19:57.136692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.772 [2024-07-13 22:19:57.136738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:37.772 [2024-07-13 22:19:57.156402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:37.772 [2024-07-13 22:19:57.156892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:37.772 [2024-07-13 22:19:57.156937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.031 [2024-07-13 22:19:57.177700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:38.031 [2024-07-13 22:19:57.178189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.031 [2024-07-13 22:19:57.178245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:38.031 [2024-07-13 22:19:57.197246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:38.031 [2024-07-13 22:19:57.197702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.031 [2024-07-13 22:19:57.197743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:38.031 [2024-07-13 22:19:57.216812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:38.031 [2024-07-13 22:19:57.217326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.031 [2024-07-13 22:19:57.217369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:36:38.032 [2024-07-13 22:19:57.238534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:38.032 [2024-07-13 22:19:57.239041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:38.032 [2024-07-13 22:19:57.239086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:38.032 [2024-07-13 22:19:57.258491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000006080) with pdu=0x2000195fef90 00:36:38.032 [2024-07-13 22:19:57.259003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT 
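Each of those completions bumps the controller's transient transport error counter, which the test reads back below over the bdevperf RPC socket before asserting it is non-zero. A minimal sketch of that readback, assuming only the socket path, bdev name, and jq path that appear in the trace:

    # Pull the transient transport error count for nvme0n1 out of bdev_get_iostat
    # (the >0 assertion mirrors the (( 97 > 0 )) check traced below).
    errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 )) && echo "digest errors surfaced as transient transport errors: $errcount"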
00:36:38.032
00:36:38.032 Latency(us)
00:36:38.032 Device Information          : runtime(s)    IOPS    MiB/s    Fail/s    TO/s    Average      min      max
00:36:38.032 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:36:38.032 nvme0n1                     :       2.01  1505.87   188.23      0.00    0.00   10589.52  6699.24  27767.85
00:36:38.032 ===================================================================================================================
00:36:38.032 Total                       :              1505.87   188.23      0.00    0.00   10589.52  6699.24  27767.85
00:36:38.032 0
00:36:38.032 22:19:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:38.032 22:19:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:38.032 22:19:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:38.032 22:19:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:36:38.292 22:19:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 97 > 0 ))
00:36:38.292 22:19:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 46109
00:36:38.292 22:19:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 46109 ']'
00:36:38.292 22:19:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 46109
00:36:38.292 22:19:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:36:38.292 22:19:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:38.292 22:19:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 46109
00:36:38.292 22:19:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:36:38.292 22:19:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:36:38.292 22:19:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46109'
00:36:38.292 killing process with pid 46109
00:36:38.292 22:19:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 46109
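killprocess, traced above for the bperf pid, is the harness's guarded kill-and-reap helper. A sketch of its effective behavior on the path actually traced (untraced branches, such as the sudo case, are assumptions):

    # Refuse empty pids, verify the process is alive, avoid signalling a sudo
    # wrapper directly, then kill and reap so the exit status propagates.
    killprocess() {
        [ -z "$1" ] && return 1
        kill -0 "$1" || return 1
        if [ "$(ps --no-headers -o comm= "$1")" != sudo ]; then
            echo "killing process with pid $1"
            kill "$1"
        fi
        wait "$1"
    }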
00:36:38.292 Received shutdown signal, test time was about 2.000000 seconds
00:36:38.292
00:36:38.292 Latency(us)
00:36:38.292 Device Information          : runtime(s)    IOPS    MiB/s    Fail/s    TO/s    Average      min      max
00:36:38.292 ===================================================================================================================
00:36:38.292 Total                       :                 0.00     0.00      0.00    0.00       0.00     0.00     0.00
00:36:38.292 22:19:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 46109
00:36:39.228 22:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 44086
00:36:39.228 22:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 44086 ']'
00:36:39.228 22:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 44086
00:36:39.228 22:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:36:39.228 22:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:36:39.228 22:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 44086
00:36:39.228 22:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:36:39.228 22:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:36:39.228 22:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 44086'
00:36:39.228 killing process with pid 44086
00:36:39.228 22:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 44086
00:36:39.228 22:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 44086
00:36:40.606
00:36:40.606 real	0m23.503s
00:36:40.606 user	0m45.542s
00:36:40.606 sys	0m4.525s
00:36:40.606 22:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable
00:36:40.606 22:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:36:40.606 ************************************
00:36:40.606 END TEST nvmf_digest_error
00:36:40.606 ************************************
00:36:40.606 22:19:59 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:36:40.606 22:19:59 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:36:40.606 22:19:59 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:36:40.606 22:19:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:36:40.606 22:19:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:36:40.606 22:19:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:36:40.606 22:19:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:36:40.606 22:19:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:36:40.606 22:19:59 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:36:40.606 rmmod nvme_tcp
00:36:40.606 rmmod nvme_fabrics
00:36:40.606 rmmod nvme_keyring
00:36:40.865 22:20:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:36:40.865 22:20:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:36:40.865 22:20:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:36:40.865 22:20:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 44086 ']'
00:36:40.865 22:20:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 44086
00:36:40.865 22:20:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 44086 ']'
00:36:40.865 22:20:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 44086
00:36:40.865 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (44086) - No such process
00:36:40.865 22:20:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 44086 is not found'
00:36:40.865 Process with pid 44086 is not found
00:36:40.865 22:20:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:36:40.865 22:20:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:36:40.865 22:20:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:36:40.865 22:20:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:36:40.865 22:20:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:36:40.865 22:20:00 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:40.865 22:20:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:36:40.865 22:20:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:42.771 22:20:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:36:42.771
00:36:42.771 real	0m52.260s
00:36:42.771 user	1m33.824s
00:36:42.771 sys	0m10.560s
00:36:42.771 22:20:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable
00:36:42.771 22:20:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:36:42.771 ************************************
00:36:42.771 END TEST nvmf_digest
00:36:42.771 ************************************
00:36:42.771 22:20:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
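Stripped of the per-line tracing, the nvmftestfini teardown that just ran reduces to a handful of commands. A sketch under the same names the trace uses (the module unload, retry loop, and address flush are traced; the exact body of _remove_spdk_ns is an assumption):

    # Unload the NVMe-oF initiator modules (retrying while they are in use),
    # then tear down the target-side namespace and flush the initiator address.
    set +e
    for i in {1..20}; do modprobe -v -r nvme-tcp && break; done
    modprobe -v -r nvme-fabrics
    set -e
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1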
00:36:42.771 22:20:02 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]]
00:36:42.771 22:20:02 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]]
00:36:42.771 22:20:02 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]]
00:36:42.771 22:20:02 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:36:42.771 22:20:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:36:42.771 22:20:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:36:42.771 22:20:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:42.771 ************************************
00:36:42.771 START TEST nvmf_bdevperf
00:36:42.771 ************************************
00:36:42.771 22:20:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:36:42.771 * Looking for test storage...
00:36:42.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:36:42.771 22:20:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:36:42.771 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:36:42.771 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:36:42.771 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:36:42.771 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:36:42.771 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:36:42.771 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:36:42.771 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:36:42.771 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:36:42.771 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:36:42.771 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:36:42.771 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:36:42.771 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:36:42.771 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:36:42.771 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:36:42.771 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:36:42.771 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:36:42.771 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:36:42.771 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:36:42.771 22:20:02 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:36:42.771 22:20:02 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:42.771 22:20:02 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:42.771 22:20:02 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain prefixes repeated four more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:42.772 22:20:02 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[repeated toolchain prefixes elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:42.772 22:20:02 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[repeated toolchain prefixes elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:42.772 22:20:02 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH
00:36:42.772 22:20:02 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[repeated toolchain prefixes elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:43.031 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0
00:36:43.031 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:36:43.031 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:36:43.031 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:36:43.031 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:36:43.031 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:36:43.031 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:36:43.031 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:36:43.031 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0
00:36:43.031 22:20:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
00:36:43.031 22:20:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:36:43.031 22:20:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit
00:36:43.031 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:36:43.031 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:36:43.031 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs
00:36:43.031 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no
00:36:43.031 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns
00:36:43.031 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:43.031 22:20:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:36:43.031 22:20:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:43.031 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:36:43.031 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:36:43.031 22:20:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable
00:36:43.031 22:20:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=()
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=()
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=()
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=()
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=()
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=()
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=()
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:36:44.934 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:36:44.934 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]]
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:36:44.934 Found net devices under 0000:0a:00.0: cvl_0_0
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]]
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:36:44.934 Found net devices under 0000:0a:00.1: cvl_0_1
00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
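The discovery above matches each supported PCI ID against the host and then resolves the bound netdev through sysfs. The same lookup reduced to its two moving parts, as a sketch (PCI addresses and the sysfs glob are taken from the trace):

    # For every matched NIC, the netdev name is simply the directory under
    # /sys/bus/pci/devices/<pci>/net/ (cvl_0_0 and cvl_0_1 on this host).
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
    done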
pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:44.934 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:44.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:44.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:36:44.934 00:36:44.934 --- 10.0.0.2 ping statistics --- 00:36:44.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:44.934 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:36:44.934 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:44.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:44.935 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:44.935 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0
00:36:44.935 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:36:44.935 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:44.935 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:36:44.935 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:36:44.935 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:44.935 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:36:44.935 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:36:44.935 22:20:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:36:44.935 22:20:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:36:44.935 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:36:44.935 22:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:36:44.935 22:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:44.935 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=48732
00:36:44.935 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:36:44.935 22:20:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 48732
00:36:44.935 22:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 48732 ']'
00:36:44.935 22:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:44.935 22:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:36:44.935 22:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:44.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:44.935 22:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:36:44.935 22:20:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:45.191 [2024-07-13 22:20:04.280486] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:36:45.191 [2024-07-13 22:20:04.280620] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:45.191 EAL: No free 2048 kB hugepages reported on node 1
00:36:45.191 [2024-07-13 22:20:04.425198] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:36:45.448 [2024-07-13 22:20:04.689259] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:45.448 [2024-07-13 22:20:04.689336] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:45.448 [2024-07-13 22:20:04.689388] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:45.448 [2024-07-13 22:20:04.689423] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:45.448 [2024-07-13 22:20:04.689457] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:45.448 [2024-07-13 22:20:04.689601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:36:45.448 [2024-07-13 22:20:04.689677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:36:45.448 [2024-07-13 22:20:04.689679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:46.051 [2024-07-13 22:20:05.247988] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:46.051 Malloc0
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:46.051 [2024-07-13 22:20:05.367008] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
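With the trace noise removed, the target provisioning above is five RPCs against the namespaced nvmf_tgt. A condensed sketch (every subcommand, name, and flag below is taken from the trace; that rpc_cmd resolves to scripts/rpc.py against the target's /var/tmp/spdk.sock is an assumption based on the waitforlisten output):

    # Create the TCP transport, back a 64 MiB / 512 B-block malloc bdev, and
    # export it as a namespace of cnode1, listening on 10.0.0.2:4420.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420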
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:46.051 [2024-07-13 22:20:05.367008] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:36:46.051 {
00:36:46.051 "params": {
00:36:46.051 "name": "Nvme$subsystem",
00:36:46.051 "trtype": "$TEST_TRANSPORT",
00:36:46.051 "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:46.051 "adrfam": "ipv4",
00:36:46.051 "trsvcid": "$NVMF_PORT",
00:36:46.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:46.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:46.051 "hdgst": ${hdgst:-false},
00:36:46.051 "ddgst": ${ddgst:-false}
00:36:46.051 },
00:36:46.051 "method": "bdev_nvme_attach_controller"
00:36:46.051 }
00:36:46.051 EOF
00:36:46.051 )")
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:36:46.051 22:20:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:36:46.051 "params": {
00:36:46.051 "name": "Nvme1",
00:36:46.051 "trtype": "tcp",
00:36:46.051 "traddr": "10.0.0.2",
00:36:46.051 "adrfam": "ipv4",
00:36:46.051 "trsvcid": "4420",
00:36:46.052 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:36:46.052 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:36:46.052 "hdgst": false,
00:36:46.052 "ddgst": false
00:36:46.052 },
00:36:46.052 "method": "bdev_nvme_attach_controller"
00:36:46.052 }'
00:36:46.312 [2024-07-13 22:20:05.452895] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:36:46.312 [2024-07-13 22:20:05.453043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid48933 ]
00:36:46.312 EAL: No free 2048 kB hugepages reported on node 1
00:36:46.312 [2024-07-13 22:20:05.577821] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:46.571 [2024-07-13 22:20:05.814224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:36:47.140 Running I/O for 1 seconds...
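Stripped of the xtrace noise, the target-side setup issued through rpc_cmd above reduces to five RPCs. As a standalone sketch (flags copied verbatim from the trace; the default /var/tmp/spdk.sock is assumed):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport, options as traced above
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB ramdisk with 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420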
00:36:48.075 
00:36:48.075                                                       Latency(us)
00:36:48.075 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:36:48.075 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:36:48.075 	 Verification LBA range: start 0x0 length 0x4000
00:36:48.075 	 Nvme1n1            :       1.01    6177.02      24.13       0.00     0.00   20636.51    2548.62   20777.34
00:36:48.075 ===================================================================================================================
00:36:48.075 Total              :               6177.02      24.13       0.00     0.00   20636.51    2548.62   20777.34
00:36:49.014 22:20:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=49271
00:36:49.014 22:20:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:36:49.272 22:20:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:36:49.272 22:20:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:36:49.272 22:20:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:36:49.272 22:20:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:36:49.272 22:20:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:36:49.272 22:20:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:36:49.272 {
00:36:49.272 "params": {
00:36:49.272 "name": "Nvme$subsystem",
00:36:49.272 "trtype": "$TEST_TRANSPORT",
00:36:49.272 "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:49.272 "adrfam": "ipv4",
00:36:49.272 "trsvcid": "$NVMF_PORT",
00:36:49.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:49.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:49.272 "hdgst": ${hdgst:-false},
00:36:49.272 "ddgst": ${ddgst:-false}
00:36:49.272 },
00:36:49.272 "method": "bdev_nvme_attach_controller"
00:36:49.272 }
00:36:49.272 EOF
00:36:49.272 )")
00:36:49.272 22:20:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:36:49.272 22:20:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:36:49.273 22:20:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:36:49.273 22:20:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:36:49.273 "params": {
00:36:49.273 "name": "Nvme1",
00:36:49.273 "trtype": "tcp",
00:36:49.273 "traddr": "10.0.0.2",
00:36:49.273 "adrfam": "ipv4",
00:36:49.273 "trsvcid": "4420",
00:36:49.273 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:36:49.273 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:36:49.273 "hdgst": false,
00:36:49.273 "ddgst": false
00:36:49.273 },
00:36:49.273 "method": "bdev_nvme_attach_controller"
00:36:49.273 }'
00:36:49.273 [2024-07-13 22:20:08.487636] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:36:49.273 [2024-07-13 22:20:08.487777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49271 ]
00:36:49.273 EAL: No free 2048 kB hugepages reported on node 1
00:36:49.544 [2024-07-13 22:20:08.617877] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:49.544 [2024-07-13 22:20:08.851490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:36:50.109 Running I/O for 15 seconds...
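The --json /dev/fd/62 and /dev/fd/63 arguments are process substitutions feeding bdevperf the JSON that gen_nvmf_target_json prints above. Written to a file, an equivalent standalone run would look like the sketch below; only the bdev_nvme_attach_controller entry is verbatim from the trace, while the outer subsystems/bdev wrapper is the usual SPDK JSON-config layout and should be treated as an assumption here:

    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same workload as the 15-second run traced above.
    ./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 15 -f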
00:36:52.652 22:20:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 48732
00:36:52.652 22:20:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:36:52.652 [2024-07-13 22:20:11.434666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:52.652 [2024-07-13 22:20:11.434731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:52.652 [2024-07-13 22:20:11.434789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:36:52.652 [2024-07-13 22:20:11.434819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:52.652 [... the same command/completion pair repeats for every other command still queued on qid:1 -- WRITEs lba:98632 through 99432 and READs lba:98424 through 98608 -- each completed ABORTED - SQ DELETION (00/08); ~125 pairs elided here for readability ...]
00:36:52.655 [2024-07-13 22:20:11.441188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2c80 is same with the state(5) to be set
00:36:52.655 [2024-07-13 22:20:11.441215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:36:52.655 [2024-07-13 22:20:11.441235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:36:52.655 [2024-07-13 22:20:11.441256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98616 len:8 PRP1 0x0 PRP2 0x0
00:36:52.655 [2024-07-13 22:20:11.441278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:52.655 [2024-07-13 22:20:11.441579] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150001f2c80 was disconnected and freed. reset controller.
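Everything between the kill -9 and this point is the host-side NVMe driver flushing I/O qpair 1 after the target process died: each queued READ/WRITE is printed once and completed with ABORTED - SQ DELETION, i.e. status code type 00h (generic) / status code 08h, which is what the (00/08) in every completion line encodes. Counting the aborted WRITEs (lba 98624-99432, step 8) and READs (lba 98416-98616, step 8) gives 128 commands, matching the -q 128 queue depth. When triaging a console log like this, a quick sanity count (the log file name here is hypothetical):

    grep -c 'ABORTED - SQ DELETION (00/08) qid:1' console.log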
00:36:52.655 [2024-07-13 22:20:11.441687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:36:52.655 [2024-07-13 22:20:11.441725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:52.655 [2024-07-13 22:20:11.441757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:36:52.655 [2024-07-13 22:20:11.441780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:52.655 [2024-07-13 22:20:11.441802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:36:52.655 [2024-07-13 22:20:11.441824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:52.655 [2024-07-13 22:20:11.441846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:36:52.655 [2024-07-13 22:20:11.441878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:36:52.655 [2024-07-13 22:20:11.441918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:52.655 [2024-07-13 22:20:11.446030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:52.655 [2024-07-13 22:20:11.446090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:52.655 [2024-07-13 22:20:11.446986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.655 [2024-07-13 22:20:11.447033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:52.655 [2024-07-13 22:20:11.447075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:52.655 [2024-07-13 22:20:11.447381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:52.655 [2024-07-13 22:20:11.447674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:52.655 [2024-07-13 22:20:11.447707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:52.655 [2024-07-13 22:20:11.447732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:52.655 [2024-07-13 22:20:11.451910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:52.655 [2024-07-13 22:20:11.461084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:52.655 [2024-07-13 22:20:11.461740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.655 [2024-07-13 22:20:11.461802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:52.655 [2024-07-13 22:20:11.461828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:52.655 [2024-07-13 22:20:11.462127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:52.655 [2024-07-13 22:20:11.462420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:52.655 [2024-07-13 22:20:11.462451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:52.655 [2024-07-13 22:20:11.462473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:52.655 [2024-07-13 22:20:11.466653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:52.656 [... the identical resetting-controller / connect() errno 111 / "Resetting controller failed." cycle repeats five more times, starting at 22:20:11.475725, 22:20:11.490211, 22:20:11.504802, 22:20:11.519434 and 22:20:11.534061, and ending at 22:20:11.539470 ...]
00:36:52.656 [2024-07-13 22:20:11.548764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.656 [2024-07-13 22:20:11.549353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.656 [2024-07-13 22:20:11.549403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.656 [2024-07-13 22:20:11.549426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.656 [2024-07-13 22:20:11.549724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.656 [2024-07-13 22:20:11.550025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.656 [2024-07-13 22:20:11.550056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.656 [2024-07-13 22:20:11.550078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.656 [2024-07-13 22:20:11.554219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:52.656 [2024-07-13 22:20:11.563231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.656 [2024-07-13 22:20:11.563743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.656 [2024-07-13 22:20:11.563783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.656 [2024-07-13 22:20:11.563809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.656 [2024-07-13 22:20:11.564107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.656 [2024-07-13 22:20:11.564396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.656 [2024-07-13 22:20:11.564426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.656 [2024-07-13 22:20:11.564448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.656 [2024-07-13 22:20:11.568603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:52.656 [2024-07-13 22:20:11.577886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.656 [2024-07-13 22:20:11.578383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.656 [2024-07-13 22:20:11.578418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.656 [2024-07-13 22:20:11.578439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.656 [2024-07-13 22:20:11.578733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.656 [2024-07-13 22:20:11.579036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.656 [2024-07-13 22:20:11.579067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.656 [2024-07-13 22:20:11.579089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.656 [2024-07-13 22:20:11.583230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:52.656 [2024-07-13 22:20:11.592492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.656 [2024-07-13 22:20:11.592994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.656 [2024-07-13 22:20:11.593035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.656 [2024-07-13 22:20:11.593061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.657 [2024-07-13 22:20:11.593345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.657 [2024-07-13 22:20:11.593634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.657 [2024-07-13 22:20:11.593665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.657 [2024-07-13 22:20:11.593686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.657 [2024-07-13 22:20:11.597825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:52.657 [2024-07-13 22:20:11.607086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.657 [2024-07-13 22:20:11.607608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.657 [2024-07-13 22:20:11.607648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.657 [2024-07-13 22:20:11.607674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.657 [2024-07-13 22:20:11.607971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.657 [2024-07-13 22:20:11.608259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.657 [2024-07-13 22:20:11.608290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.657 [2024-07-13 22:20:11.608312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.657 [2024-07-13 22:20:11.612463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:52.657 [2024-07-13 22:20:11.621722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.657 [2024-07-13 22:20:11.622254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.657 [2024-07-13 22:20:11.622300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.657 [2024-07-13 22:20:11.622326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.657 [2024-07-13 22:20:11.622612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.657 [2024-07-13 22:20:11.622910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.657 [2024-07-13 22:20:11.622942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.657 [2024-07-13 22:20:11.622963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.657 [2024-07-13 22:20:11.627118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:52.657 [2024-07-13 22:20:11.636350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.657 [2024-07-13 22:20:11.636957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.657 [2024-07-13 22:20:11.636997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.657 [2024-07-13 22:20:11.637023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.657 [2024-07-13 22:20:11.637309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.657 [2024-07-13 22:20:11.637595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.657 [2024-07-13 22:20:11.637626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.657 [2024-07-13 22:20:11.637648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.657 [2024-07-13 22:20:11.641768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:52.657 [2024-07-13 22:20:11.651021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.657 [2024-07-13 22:20:11.651544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.657 [2024-07-13 22:20:11.651584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.657 [2024-07-13 22:20:11.651608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.657 [2024-07-13 22:20:11.651902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.657 [2024-07-13 22:20:11.652189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.657 [2024-07-13 22:20:11.652220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.657 [2024-07-13 22:20:11.652241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.657 [2024-07-13 22:20:11.656380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:52.657 [2024-07-13 22:20:11.665615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.657 [2024-07-13 22:20:11.666115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.657 [2024-07-13 22:20:11.666157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.657 [2024-07-13 22:20:11.666197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.657 [2024-07-13 22:20:11.666483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.657 [2024-07-13 22:20:11.666777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.657 [2024-07-13 22:20:11.666808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.657 [2024-07-13 22:20:11.666829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.657 [2024-07-13 22:20:11.670968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:52.657 [2024-07-13 22:20:11.680280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.657 [2024-07-13 22:20:11.680792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.657 [2024-07-13 22:20:11.680833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.657 [2024-07-13 22:20:11.680858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.657 [2024-07-13 22:20:11.681155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.657 [2024-07-13 22:20:11.681443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.657 [2024-07-13 22:20:11.681473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.657 [2024-07-13 22:20:11.681496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.657 [2024-07-13 22:20:11.685622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:52.657 [2024-07-13 22:20:11.696221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.657 [2024-07-13 22:20:11.696889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.657 [2024-07-13 22:20:11.696935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.657 [2024-07-13 22:20:11.696962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.657 [2024-07-13 22:20:11.697252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.657 [2024-07-13 22:20:11.697637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.657 [2024-07-13 22:20:11.697683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.657 [2024-07-13 22:20:11.697721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.657 [2024-07-13 22:20:11.702862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:52.657 [2024-07-13 22:20:11.710742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.657 [2024-07-13 22:20:11.711307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.657 [2024-07-13 22:20:11.711350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.657 [2024-07-13 22:20:11.711377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.657 [2024-07-13 22:20:11.711663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.657 [2024-07-13 22:20:11.711962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.657 [2024-07-13 22:20:11.711994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.657 [2024-07-13 22:20:11.712016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.657 [2024-07-13 22:20:11.716148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:52.657 [2024-07-13 22:20:11.725390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.657 [2024-07-13 22:20:11.725902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.657 [2024-07-13 22:20:11.725945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.657 [2024-07-13 22:20:11.725970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.657 [2024-07-13 22:20:11.726257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.657 [2024-07-13 22:20:11.726545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.657 [2024-07-13 22:20:11.726576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.657 [2024-07-13 22:20:11.726597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.657 [2024-07-13 22:20:11.730735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:52.657 [2024-07-13 22:20:11.739983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.657 [2024-07-13 22:20:11.740504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.657 [2024-07-13 22:20:11.740544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.657 [2024-07-13 22:20:11.740569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.657 [2024-07-13 22:20:11.740855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.657 [2024-07-13 22:20:11.741154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.657 [2024-07-13 22:20:11.741185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.657 [2024-07-13 22:20:11.741207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.657 [2024-07-13 22:20:11.745332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:52.657 [2024-07-13 22:20:11.754569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.657 [2024-07-13 22:20:11.755120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.657 [2024-07-13 22:20:11.755161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.658 [2024-07-13 22:20:11.755186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.658 [2024-07-13 22:20:11.755471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.658 [2024-07-13 22:20:11.755758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.658 [2024-07-13 22:20:11.755789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.658 [2024-07-13 22:20:11.755811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.658 [2024-07-13 22:20:11.759950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:52.658 [2024-07-13 22:20:11.769182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.658 [2024-07-13 22:20:11.769696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.658 [2024-07-13 22:20:11.769742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.658 [2024-07-13 22:20:11.769767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.658 [2024-07-13 22:20:11.770066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.658 [2024-07-13 22:20:11.770352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.658 [2024-07-13 22:20:11.770382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.658 [2024-07-13 22:20:11.770404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.658 [2024-07-13 22:20:11.774529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:52.658 [2024-07-13 22:20:11.783746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.658 [2024-07-13 22:20:11.784265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.658 [2024-07-13 22:20:11.784305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.658 [2024-07-13 22:20:11.784330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.658 [2024-07-13 22:20:11.784615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.658 [2024-07-13 22:20:11.784917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.658 [2024-07-13 22:20:11.784948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.658 [2024-07-13 22:20:11.784970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.658 [2024-07-13 22:20:11.789097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:52.658 [2024-07-13 22:20:11.798334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.658 [2024-07-13 22:20:11.798862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.658 [2024-07-13 22:20:11.798909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.658 [2024-07-13 22:20:11.798935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.658 [2024-07-13 22:20:11.799220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.658 [2024-07-13 22:20:11.799507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.658 [2024-07-13 22:20:11.799538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.658 [2024-07-13 22:20:11.799560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.658 [2024-07-13 22:20:11.803694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:52.658 [2024-07-13 22:20:11.812947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.658 [2024-07-13 22:20:11.813471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.658 [2024-07-13 22:20:11.813511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.658 [2024-07-13 22:20:11.813535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.658 [2024-07-13 22:20:11.813820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.658 [2024-07-13 22:20:11.814125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.658 [2024-07-13 22:20:11.814156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.658 [2024-07-13 22:20:11.814178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.658 [2024-07-13 22:20:11.818318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:52.658 [2024-07-13 22:20:11.827586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.658 [2024-07-13 22:20:11.828129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.658 [2024-07-13 22:20:11.828170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.658 [2024-07-13 22:20:11.828195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.658 [2024-07-13 22:20:11.828480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.658 [2024-07-13 22:20:11.828768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.658 [2024-07-13 22:20:11.828798] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.658 [2024-07-13 22:20:11.828820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.658 [2024-07-13 22:20:11.832962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:52.658 [2024-07-13 22:20:11.842207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.658 [2024-07-13 22:20:11.842695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.658 [2024-07-13 22:20:11.842735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.658 [2024-07-13 22:20:11.842760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.658 [2024-07-13 22:20:11.843057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.658 [2024-07-13 22:20:11.843344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.658 [2024-07-13 22:20:11.843375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.658 [2024-07-13 22:20:11.843397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.658 [2024-07-13 22:20:11.847530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:52.658 [2024-07-13 22:20:11.856786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.658 [2024-07-13 22:20:11.857281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.658 [2024-07-13 22:20:11.857322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.658 [2024-07-13 22:20:11.857347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.658 [2024-07-13 22:20:11.857631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.658 [2024-07-13 22:20:11.857933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.658 [2024-07-13 22:20:11.857965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.658 [2024-07-13 22:20:11.857993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.658 [2024-07-13 22:20:11.862132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:52.658 [2024-07-13 22:20:11.871385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.658 [2024-07-13 22:20:11.871947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.658 [2024-07-13 22:20:11.872013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.658 [2024-07-13 22:20:11.872038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.658 [2024-07-13 22:20:11.872324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.658 [2024-07-13 22:20:11.872626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.658 [2024-07-13 22:20:11.872656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.658 [2024-07-13 22:20:11.872678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.658 [2024-07-13 22:20:11.876807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:52.658 [2024-07-13 22:20:11.885810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.658 [2024-07-13 22:20:11.886300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.658 [2024-07-13 22:20:11.886340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.658 [2024-07-13 22:20:11.886366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.658 [2024-07-13 22:20:11.886651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.658 [2024-07-13 22:20:11.886951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.658 [2024-07-13 22:20:11.886982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.658 [2024-07-13 22:20:11.887003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.658 [2024-07-13 22:20:11.891135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:52.658 [2024-07-13 22:20:11.900371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.658 [2024-07-13 22:20:11.900858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.658 [2024-07-13 22:20:11.900907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.658 [2024-07-13 22:20:11.900933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.658 [2024-07-13 22:20:11.901219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.658 [2024-07-13 22:20:11.901507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.658 [2024-07-13 22:20:11.901537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.658 [2024-07-13 22:20:11.901558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.658 [2024-07-13 22:20:11.905682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:52.658 [2024-07-13 22:20:11.914930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.658 [2024-07-13 22:20:11.915452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.658 [2024-07-13 22:20:11.915498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.659 [2024-07-13 22:20:11.915523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.659 [2024-07-13 22:20:11.915809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.659 [2024-07-13 22:20:11.916108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.659 [2024-07-13 22:20:11.916139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.659 [2024-07-13 22:20:11.916161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.659 [2024-07-13 22:20:11.920293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:52.659 [2024-07-13 22:20:11.929538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.659 [2024-07-13 22:20:11.930055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.659 [2024-07-13 22:20:11.930094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.659 [2024-07-13 22:20:11.930120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.659 [2024-07-13 22:20:11.930403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.659 [2024-07-13 22:20:11.930690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.659 [2024-07-13 22:20:11.930722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.659 [2024-07-13 22:20:11.930743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.659 [2024-07-13 22:20:11.934901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:52.659 [2024-07-13 22:20:11.944145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.659 [2024-07-13 22:20:11.944646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.659 [2024-07-13 22:20:11.944686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.659 [2024-07-13 22:20:11.944711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.659 [2024-07-13 22:20:11.945008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.659 [2024-07-13 22:20:11.945396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.659 [2024-07-13 22:20:11.945442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.659 [2024-07-13 22:20:11.945481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.659 [2024-07-13 22:20:11.950704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:52.659 [2024-07-13 22:20:11.958615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.659 [2024-07-13 22:20:11.959118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.659 [2024-07-13 22:20:11.959162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.659 [2024-07-13 22:20:11.959188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.659 [2024-07-13 22:20:11.959473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.659 [2024-07-13 22:20:11.959767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.659 [2024-07-13 22:20:11.959799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.659 [2024-07-13 22:20:11.959820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.659 [2024-07-13 22:20:11.963970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:52.659 [2024-07-13 22:20:11.973207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.659 [2024-07-13 22:20:11.973725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.659 [2024-07-13 22:20:11.973766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.659 [2024-07-13 22:20:11.973791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.659 [2024-07-13 22:20:11.974088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.659 [2024-07-13 22:20:11.974376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.659 [2024-07-13 22:20:11.974407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.659 [2024-07-13 22:20:11.974428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.659 [2024-07-13 22:20:11.978561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:52.659 [2024-07-13 22:20:11.987804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.659 [2024-07-13 22:20:11.988339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.659 [2024-07-13 22:20:11.988380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.659 [2024-07-13 22:20:11.988405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.659 [2024-07-13 22:20:11.988690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.659 [2024-07-13 22:20:11.988992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.659 [2024-07-13 22:20:11.989024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.659 [2024-07-13 22:20:11.989046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.659 [2024-07-13 22:20:11.993175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:52.659 [2024-07-13 22:20:12.002428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.659 [2024-07-13 22:20:12.002946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.659 [2024-07-13 22:20:12.002987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.659 [2024-07-13 22:20:12.003012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.659 [2024-07-13 22:20:12.003295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.659 [2024-07-13 22:20:12.003582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.659 [2024-07-13 22:20:12.003612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.659 [2024-07-13 22:20:12.003640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.659 [2024-07-13 22:20:12.007779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:52.659 [2024-07-13 22:20:12.017031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.659 [2024-07-13 22:20:12.017549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.659 [2024-07-13 22:20:12.017590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.659 [2024-07-13 22:20:12.017615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.659 [2024-07-13 22:20:12.017912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.659 [2024-07-13 22:20:12.018199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.659 [2024-07-13 22:20:12.018230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.659 [2024-07-13 22:20:12.018251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.659 [2024-07-13 22:20:12.022397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:52.659 [2024-07-13 22:20:12.031661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.659 [2024-07-13 22:20:12.032173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.659 [2024-07-13 22:20:12.032213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.659 [2024-07-13 22:20:12.032238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.659 [2024-07-13 22:20:12.032523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.659 [2024-07-13 22:20:12.032811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.659 [2024-07-13 22:20:12.032841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.659 [2024-07-13 22:20:12.032863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.659 [2024-07-13 22:20:12.037098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:52.921 [2024-07-13 22:20:12.046376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.921 [2024-07-13 22:20:12.046903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.921 [2024-07-13 22:20:12.046946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.921 [2024-07-13 22:20:12.046972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.921 [2024-07-13 22:20:12.047257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.921 [2024-07-13 22:20:12.047546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.921 [2024-07-13 22:20:12.047577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.921 [2024-07-13 22:20:12.047599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.921 [2024-07-13 22:20:12.051735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:52.921 [2024-07-13 22:20:12.060863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:52.921 [2024-07-13 22:20:12.061410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.921 [2024-07-13 22:20:12.061451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:52.921 [2024-07-13 22:20:12.061476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:52.921 [2024-07-13 22:20:12.061761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:52.921 [2024-07-13 22:20:12.062061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:52.921 [2024-07-13 22:20:12.062093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:52.921 [2024-07-13 22:20:12.062115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:52.921 [2024-07-13 22:20:12.066241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:52.921 [2024-07-13 22:20:12.075469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:52.921 [2024-07-13 22:20:12.075968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.921 [2024-07-13 22:20:12.076009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:52.921 [2024-07-13 22:20:12.076034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:52.921 [2024-07-13 22:20:12.076319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:52.921 [2024-07-13 22:20:12.076606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:52.921 [2024-07-13 22:20:12.076637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:52.921 [2024-07-13 22:20:12.076673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:52.921 [2024-07-13 22:20:12.080807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:52.921 [2024-07-13 22:20:12.090064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:52.921 [2024-07-13 22:20:12.090576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.921 [2024-07-13 22:20:12.090617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:52.921 [2024-07-13 22:20:12.090642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:52.921 [2024-07-13 22:20:12.090941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:52.921 [2024-07-13 22:20:12.091230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:52.921 [2024-07-13 22:20:12.091261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:52.921 [2024-07-13 22:20:12.091282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:52.921 [2024-07-13 22:20:12.095423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:52.921 [2024-07-13 22:20:12.104668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:52.921 [2024-07-13 22:20:12.105202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.921 [2024-07-13 22:20:12.105244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:52.921 [2024-07-13 22:20:12.105270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:52.921 [2024-07-13 22:20:12.105560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:52.921 [2024-07-13 22:20:12.105850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:52.921 [2024-07-13 22:20:12.105891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:52.921 [2024-07-13 22:20:12.105913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:52.921 [2024-07-13 22:20:12.110054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:52.922 [2024-07-13 22:20:12.119333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:52.922 [2024-07-13 22:20:12.119855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.922 [2024-07-13 22:20:12.119905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:52.922 [2024-07-13 22:20:12.119931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:52.922 [2024-07-13 22:20:12.120215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:52.922 [2024-07-13 22:20:12.120503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:52.922 [2024-07-13 22:20:12.120535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:52.922 [2024-07-13 22:20:12.120556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:52.922 [2024-07-13 22:20:12.124711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:52.922 [2024-07-13 22:20:12.133986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:52.922 [2024-07-13 22:20:12.134660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.922 [2024-07-13 22:20:12.134716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:52.922 [2024-07-13 22:20:12.134742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:52.922 [2024-07-13 22:20:12.135040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:52.922 [2024-07-13 22:20:12.135328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:52.922 [2024-07-13 22:20:12.135359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:52.922 [2024-07-13 22:20:12.135380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:52.922 [2024-07-13 22:20:12.139524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:52.922 [2024-07-13 22:20:12.148545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:52.922 [2024-07-13 22:20:12.149095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.922 [2024-07-13 22:20:12.149145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:52.922 [2024-07-13 22:20:12.149171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:52.922 [2024-07-13 22:20:12.149457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:52.922 [2024-07-13 22:20:12.149745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:52.922 [2024-07-13 22:20:12.149776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:52.922 [2024-07-13 22:20:12.149803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:52.922 [2024-07-13 22:20:12.153953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:52.922 [2024-07-13 22:20:12.162989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:52.922 [2024-07-13 22:20:12.163475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.922 [2024-07-13 22:20:12.163515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:52.922 [2024-07-13 22:20:12.163541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:52.922 [2024-07-13 22:20:12.163827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:52.922 [2024-07-13 22:20:12.164126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:52.922 [2024-07-13 22:20:12.164158] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:52.922 [2024-07-13 22:20:12.164180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:52.922 [2024-07-13 22:20:12.168331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:52.922 [2024-07-13 22:20:12.177604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:52.922 [2024-07-13 22:20:12.178122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.922 [2024-07-13 22:20:12.178162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:52.922 [2024-07-13 22:20:12.178187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:52.922 [2024-07-13 22:20:12.178472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:52.922 [2024-07-13 22:20:12.178759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:52.922 [2024-07-13 22:20:12.178790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:52.922 [2024-07-13 22:20:12.178812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:52.922 [2024-07-13 22:20:12.182969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:52.922 [2024-07-13 22:20:12.192232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:52.922 [2024-07-13 22:20:12.192719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.922 [2024-07-13 22:20:12.192759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:52.922 [2024-07-13 22:20:12.192783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:52.922 [2024-07-13 22:20:12.193080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:52.922 [2024-07-13 22:20:12.193367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:52.922 [2024-07-13 22:20:12.193397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:52.922 [2024-07-13 22:20:12.193419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:52.922 [2024-07-13 22:20:12.197558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:52.922 [2024-07-13 22:20:12.207339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:52.922 [2024-07-13 22:20:12.207887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.922 [2024-07-13 22:20:12.207942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:52.922 [2024-07-13 22:20:12.207969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:52.922 [2024-07-13 22:20:12.208255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:52.922 [2024-07-13 22:20:12.208545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:52.922 [2024-07-13 22:20:12.208576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:52.922 [2024-07-13 22:20:12.208597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:52.922 [2024-07-13 22:20:12.212740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:52.922 [2024-07-13 22:20:12.221814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:52.922 [2024-07-13 22:20:12.222351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.922 [2024-07-13 22:20:12.222393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:52.922 [2024-07-13 22:20:12.222419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:52.922 [2024-07-13 22:20:12.222703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:52.922 [2024-07-13 22:20:12.223004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:52.922 [2024-07-13 22:20:12.223036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:52.922 [2024-07-13 22:20:12.223058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:52.922 [2024-07-13 22:20:12.227182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:52.922 [2024-07-13 22:20:12.236420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:52.923 [2024-07-13 22:20:12.236948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.923 [2024-07-13 22:20:12.236989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:52.923 [2024-07-13 22:20:12.237014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:52.923 [2024-07-13 22:20:12.237300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:52.923 [2024-07-13 22:20:12.237588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:52.923 [2024-07-13 22:20:12.237619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:52.923 [2024-07-13 22:20:12.237642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:52.923 [2024-07-13 22:20:12.241795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:52.923 [2024-07-13 22:20:12.251058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:52.923 [2024-07-13 22:20:12.251726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.923 [2024-07-13 22:20:12.251785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:52.923 [2024-07-13 22:20:12.251810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:52.923 [2024-07-13 22:20:12.252113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:52.923 [2024-07-13 22:20:12.252402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:52.923 [2024-07-13 22:20:12.252433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:52.923 [2024-07-13 22:20:12.252454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:52.923 [2024-07-13 22:20:12.256585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:52.923 [2024-07-13 22:20:12.265582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:52.923 [2024-07-13 22:20:12.266115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.923 [2024-07-13 22:20:12.266155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:52.923 [2024-07-13 22:20:12.266181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:52.923 [2024-07-13 22:20:12.266465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:52.923 [2024-07-13 22:20:12.266752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:52.923 [2024-07-13 22:20:12.266783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:52.923 [2024-07-13 22:20:12.266805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:52.923 [2024-07-13 22:20:12.270948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:52.923 [2024-07-13 22:20:12.280210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:52.923 [2024-07-13 22:20:12.280695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.923 [2024-07-13 22:20:12.280735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:52.923 [2024-07-13 22:20:12.280760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:52.923 [2024-07-13 22:20:12.281058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:52.923 [2024-07-13 22:20:12.281346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:52.923 [2024-07-13 22:20:12.281377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:52.923 [2024-07-13 22:20:12.281398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:52.923 [2024-07-13 22:20:12.285539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:52.923 [2024-07-13 22:20:12.294828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:52.923 [2024-07-13 22:20:12.295368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.923 [2024-07-13 22:20:12.295408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:52.923 [2024-07-13 22:20:12.295434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:52.923 [2024-07-13 22:20:12.295718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:52.923 [2024-07-13 22:20:12.296020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:52.923 [2024-07-13 22:20:12.296052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:52.923 [2024-07-13 22:20:12.296079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:52.923 [2024-07-13 22:20:12.300245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:52.923 [2024-07-13 22:20:12.309312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:52.923 [2024-07-13 22:20:12.309916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.923 [2024-07-13 22:20:12.309971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:52.923 [2024-07-13 22:20:12.310000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:52.923 [2024-07-13 22:20:12.310286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:52.923 [2024-07-13 22:20:12.310574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:52.923 [2024-07-13 22:20:12.310605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:52.923 [2024-07-13 22:20:12.310627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.185 [2024-07-13 22:20:12.314974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.185 [2024-07-13 22:20:12.323871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.185 [2024-07-13 22:20:12.324392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.185 [2024-07-13 22:20:12.324433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.185 [2024-07-13 22:20:12.324458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.185 [2024-07-13 22:20:12.324743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.185 [2024-07-13 22:20:12.325042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.185 [2024-07-13 22:20:12.325073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.185 [2024-07-13 22:20:12.325095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.185 [2024-07-13 22:20:12.329221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.185 [2024-07-13 22:20:12.338469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.185 [2024-07-13 22:20:12.338996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.185 [2024-07-13 22:20:12.339037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.185 [2024-07-13 22:20:12.339063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.185 [2024-07-13 22:20:12.339348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.185 [2024-07-13 22:20:12.339637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.185 [2024-07-13 22:20:12.339667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.185 [2024-07-13 22:20:12.339689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.185 [2024-07-13 22:20:12.343837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.185 [2024-07-13 22:20:12.353089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.185 [2024-07-13 22:20:12.353732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.185 [2024-07-13 22:20:12.353793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.185 [2024-07-13 22:20:12.353818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.185 [2024-07-13 22:20:12.354116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.185 [2024-07-13 22:20:12.354405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.185 [2024-07-13 22:20:12.354435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.185 [2024-07-13 22:20:12.354456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.185 [2024-07-13 22:20:12.358583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.185 [2024-07-13 22:20:12.367598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.185 [2024-07-13 22:20:12.368148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.185 [2024-07-13 22:20:12.368189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.185 [2024-07-13 22:20:12.368215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.185 [2024-07-13 22:20:12.368499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.185 [2024-07-13 22:20:12.368787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.185 [2024-07-13 22:20:12.368818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.185 [2024-07-13 22:20:12.368839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.185 [2024-07-13 22:20:12.372981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.185 [2024-07-13 22:20:12.382218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.185 [2024-07-13 22:20:12.382744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.185 [2024-07-13 22:20:12.382785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.185 [2024-07-13 22:20:12.382810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.185 [2024-07-13 22:20:12.383108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.185 [2024-07-13 22:20:12.383399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.185 [2024-07-13 22:20:12.383430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.185 [2024-07-13 22:20:12.383452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.185 [2024-07-13 22:20:12.387590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.185 [2024-07-13 22:20:12.396841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.185 [2024-07-13 22:20:12.397361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.185 [2024-07-13 22:20:12.397402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.185 [2024-07-13 22:20:12.397427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.185 [2024-07-13 22:20:12.397717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.185 [2024-07-13 22:20:12.398018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.185 [2024-07-13 22:20:12.398049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.185 [2024-07-13 22:20:12.398070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.186 [2024-07-13 22:20:12.402200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.186 [2024-07-13 22:20:12.411441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.186 [2024-07-13 22:20:12.411967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.186 [2024-07-13 22:20:12.412009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.186 [2024-07-13 22:20:12.412034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.186 [2024-07-13 22:20:12.412320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.186 [2024-07-13 22:20:12.412607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.186 [2024-07-13 22:20:12.412637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.186 [2024-07-13 22:20:12.412659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.186 [2024-07-13 22:20:12.416794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.186 [2024-07-13 22:20:12.426052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.186 [2024-07-13 22:20:12.426567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.186 [2024-07-13 22:20:12.426607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.186 [2024-07-13 22:20:12.426632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.186 [2024-07-13 22:20:12.426929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.186 [2024-07-13 22:20:12.427217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.186 [2024-07-13 22:20:12.427247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.186 [2024-07-13 22:20:12.427269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.186 [2024-07-13 22:20:12.431401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.186 [2024-07-13 22:20:12.440647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.186 [2024-07-13 22:20:12.441186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.186 [2024-07-13 22:20:12.441227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.186 [2024-07-13 22:20:12.441252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.186 [2024-07-13 22:20:12.441537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.186 [2024-07-13 22:20:12.441825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.186 [2024-07-13 22:20:12.441856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.186 [2024-07-13 22:20:12.441896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.186 [2024-07-13 22:20:12.446055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.186 [2024-07-13 22:20:12.455197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.186 [2024-07-13 22:20:12.455723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.186 [2024-07-13 22:20:12.455763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.186 [2024-07-13 22:20:12.455789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.186 [2024-07-13 22:20:12.456119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.186 [2024-07-13 22:20:12.456521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.186 [2024-07-13 22:20:12.456566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.186 [2024-07-13 22:20:12.456597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.186 [2024-07-13 22:20:12.461786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.186 [2024-07-13 22:20:12.469771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.186 [2024-07-13 22:20:12.470293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.186 [2024-07-13 22:20:12.470336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.186 [2024-07-13 22:20:12.470362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.186 [2024-07-13 22:20:12.470648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.186 [2024-07-13 22:20:12.470948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.186 [2024-07-13 22:20:12.470980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.186 [2024-07-13 22:20:12.471001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.186 [2024-07-13 22:20:12.475124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.186 [2024-07-13 22:20:12.484361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.186 [2024-07-13 22:20:12.484848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.186 [2024-07-13 22:20:12.484897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.186 [2024-07-13 22:20:12.484923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.186 [2024-07-13 22:20:12.485208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.186 [2024-07-13 22:20:12.485496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.186 [2024-07-13 22:20:12.485527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.186 [2024-07-13 22:20:12.485548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.186 [2024-07-13 22:20:12.489679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.186 [2024-07-13 22:20:12.498921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.186 [2024-07-13 22:20:12.499446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.186 [2024-07-13 22:20:12.499522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.186 [2024-07-13 22:20:12.499548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.186 [2024-07-13 22:20:12.499834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.186 [2024-07-13 22:20:12.500135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.186 [2024-07-13 22:20:12.500166] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.186 [2024-07-13 22:20:12.500188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.186 [2024-07-13 22:20:12.504323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.186 [2024-07-13 22:20:12.513551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.186 [2024-07-13 22:20:12.514049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.186 [2024-07-13 22:20:12.514090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.186 [2024-07-13 22:20:12.514115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.186 [2024-07-13 22:20:12.514400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.186 [2024-07-13 22:20:12.514688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.186 [2024-07-13 22:20:12.514718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.186 [2024-07-13 22:20:12.514741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.186 [2024-07-13 22:20:12.518889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.186 [2024-07-13 22:20:12.528135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.187 [2024-07-13 22:20:12.528670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.187 [2024-07-13 22:20:12.528710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.187 [2024-07-13 22:20:12.528735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.187 [2024-07-13 22:20:12.529033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.187 [2024-07-13 22:20:12.529321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.187 [2024-07-13 22:20:12.529351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.187 [2024-07-13 22:20:12.529372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.187 [2024-07-13 22:20:12.533507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.187 [2024-07-13 22:20:12.542749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.187 [2024-07-13 22:20:12.543274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.187 [2024-07-13 22:20:12.543315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.187 [2024-07-13 22:20:12.543340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.187 [2024-07-13 22:20:12.543630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.187 [2024-07-13 22:20:12.543931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.187 [2024-07-13 22:20:12.543963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.187 [2024-07-13 22:20:12.543984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.187 [2024-07-13 22:20:12.548122] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.187 [2024-07-13 22:20:12.557377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.187 [2024-07-13 22:20:12.557888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.187 [2024-07-13 22:20:12.557929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.187 [2024-07-13 22:20:12.557954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.187 [2024-07-13 22:20:12.558239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.187 [2024-07-13 22:20:12.558534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.187 [2024-07-13 22:20:12.558565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.187 [2024-07-13 22:20:12.558586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.187 [2024-07-13 22:20:12.562725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.187 [2024-07-13 22:20:12.572064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.187 [2024-07-13 22:20:12.572595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.187 [2024-07-13 22:20:12.572636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.187 [2024-07-13 22:20:12.572661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.187 [2024-07-13 22:20:12.572961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.187 [2024-07-13 22:20:12.573249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.187 [2024-07-13 22:20:12.573280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.187 [2024-07-13 22:20:12.573301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.187 [2024-07-13 22:20:12.577595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.448 [2024-07-13 22:20:12.586743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.448 [2024-07-13 22:20:12.587276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.448 [2024-07-13 22:20:12.587318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.448 [2024-07-13 22:20:12.587343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.448 [2024-07-13 22:20:12.587627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.448 [2024-07-13 22:20:12.587929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.448 [2024-07-13 22:20:12.587967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.448 [2024-07-13 22:20:12.587991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.448 [2024-07-13 22:20:12.592112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.448 [2024-07-13 22:20:12.601339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.448 [2024-07-13 22:20:12.601853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.448 [2024-07-13 22:20:12.601901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.448 [2024-07-13 22:20:12.601927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.448 [2024-07-13 22:20:12.602213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.448 [2024-07-13 22:20:12.602501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.448 [2024-07-13 22:20:12.602531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.448 [2024-07-13 22:20:12.602553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.448 [2024-07-13 22:20:12.606695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.448 [2024-07-13 22:20:12.615958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.448 [2024-07-13 22:20:12.616481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.448 [2024-07-13 22:20:12.616521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.448 [2024-07-13 22:20:12.616546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.448 [2024-07-13 22:20:12.616832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.448 [2024-07-13 22:20:12.617131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.448 [2024-07-13 22:20:12.617162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.448 [2024-07-13 22:20:12.617184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.448 [2024-07-13 22:20:12.621319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.448 [2024-07-13 22:20:12.630597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.448 [2024-07-13 22:20:12.631121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.448 [2024-07-13 22:20:12.631161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.448 [2024-07-13 22:20:12.631187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.448 [2024-07-13 22:20:12.631471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.448 [2024-07-13 22:20:12.631760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.448 [2024-07-13 22:20:12.631790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.448 [2024-07-13 22:20:12.631811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.448 [2024-07-13 22:20:12.635953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.448 [2024-07-13 22:20:12.645212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.448 [2024-07-13 22:20:12.645904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.448 [2024-07-13 22:20:12.645964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.448 [2024-07-13 22:20:12.645989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.448 [2024-07-13 22:20:12.646274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.448 [2024-07-13 22:20:12.646562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.448 [2024-07-13 22:20:12.646592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.448 [2024-07-13 22:20:12.646614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.448 [2024-07-13 22:20:12.650742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.448 [2024-07-13 22:20:12.659744] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.448 [2024-07-13 22:20:12.660276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.448 [2024-07-13 22:20:12.660315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.448 [2024-07-13 22:20:12.660340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.448 [2024-07-13 22:20:12.660624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.448 [2024-07-13 22:20:12.660924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.448 [2024-07-13 22:20:12.660956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.448 [2024-07-13 22:20:12.660978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.448 [2024-07-13 22:20:12.665124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.448 [2024-07-13 22:20:12.674382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.448 [2024-07-13 22:20:12.674897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.449 [2024-07-13 22:20:12.674947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.449 [2024-07-13 22:20:12.674972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.449 [2024-07-13 22:20:12.675257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.449 [2024-07-13 22:20:12.675544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.449 [2024-07-13 22:20:12.675575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.449 [2024-07-13 22:20:12.675597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.449 [2024-07-13 22:20:12.679734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.449 [2024-07-13 22:20:12.689004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.449 [2024-07-13 22:20:12.689507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.449 [2024-07-13 22:20:12.689547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.449 [2024-07-13 22:20:12.689572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.449 [2024-07-13 22:20:12.689864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.449 [2024-07-13 22:20:12.690163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.449 [2024-07-13 22:20:12.690193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.449 [2024-07-13 22:20:12.690215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.449 [2024-07-13 22:20:12.694339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.449 [2024-07-13 22:20:12.703718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.449 [2024-07-13 22:20:12.704243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.449 [2024-07-13 22:20:12.704285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.449 [2024-07-13 22:20:12.704310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.449 [2024-07-13 22:20:12.704613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.449 [2024-07-13 22:20:12.704915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.449 [2024-07-13 22:20:12.704947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.449 [2024-07-13 22:20:12.704969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.449 [2024-07-13 22:20:12.709099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.449 [2024-07-13 22:20:12.718530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.449 [2024-07-13 22:20:12.719061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.449 [2024-07-13 22:20:12.719105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.449 [2024-07-13 22:20:12.719131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.449 [2024-07-13 22:20:12.719417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.449 [2024-07-13 22:20:12.719706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.449 [2024-07-13 22:20:12.719736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.449 [2024-07-13 22:20:12.719758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.449 [2024-07-13 22:20:12.723908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.449 [2024-07-13 22:20:12.733142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.449 [2024-07-13 22:20:12.733642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.449 [2024-07-13 22:20:12.733683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.449 [2024-07-13 22:20:12.733709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.449 [2024-07-13 22:20:12.734008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.449 [2024-07-13 22:20:12.734298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.449 [2024-07-13 22:20:12.734334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.449 [2024-07-13 22:20:12.734357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.449 [2024-07-13 22:20:12.738493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.449 [2024-07-13 22:20:12.747730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.449 [2024-07-13 22:20:12.748258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.449 [2024-07-13 22:20:12.748299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.449 [2024-07-13 22:20:12.748324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.449 [2024-07-13 22:20:12.748611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.449 [2024-07-13 22:20:12.748913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.449 [2024-07-13 22:20:12.748945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.449 [2024-07-13 22:20:12.748966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.449 [2024-07-13 22:20:12.753107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.449 [2024-07-13 22:20:12.762380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.449 [2024-07-13 22:20:12.762898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.449 [2024-07-13 22:20:12.762940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.449 [2024-07-13 22:20:12.762965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.449 [2024-07-13 22:20:12.763249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.449 [2024-07-13 22:20:12.763538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.449 [2024-07-13 22:20:12.763579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.449 [2024-07-13 22:20:12.763600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.449 [2024-07-13 22:20:12.767737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.449 [2024-07-13 22:20:12.776981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.449 [2024-07-13 22:20:12.777491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.449 [2024-07-13 22:20:12.777532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.449 [2024-07-13 22:20:12.777557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.449 [2024-07-13 22:20:12.777843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.449 [2024-07-13 22:20:12.778141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.449 [2024-07-13 22:20:12.778172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.449 [2024-07-13 22:20:12.778194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.449 [2024-07-13 22:20:12.782323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.449 [2024-07-13 22:20:12.791607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.449 [2024-07-13 22:20:12.792097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.449 [2024-07-13 22:20:12.792137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.449 [2024-07-13 22:20:12.792163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.450 [2024-07-13 22:20:12.792448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.450 [2024-07-13 22:20:12.792736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.450 [2024-07-13 22:20:12.792767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.450 [2024-07-13 22:20:12.792789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.450 [2024-07-13 22:20:12.796934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.450 [2024-07-13 22:20:12.806181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.450 [2024-07-13 22:20:12.806711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.450 [2024-07-13 22:20:12.806751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.450 [2024-07-13 22:20:12.806776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.450 [2024-07-13 22:20:12.807074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.450 [2024-07-13 22:20:12.807364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.450 [2024-07-13 22:20:12.807394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.450 [2024-07-13 22:20:12.807415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.450 [2024-07-13 22:20:12.811543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.450 [2024-07-13 22:20:12.820795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:53.450 [2024-07-13 22:20:12.821321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:53.450 [2024-07-13 22:20:12.821361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:53.450 [2024-07-13 22:20:12.821386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:53.450 [2024-07-13 22:20:12.821671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:53.450 [2024-07-13 22:20:12.821974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:53.450 [2024-07-13 22:20:12.822006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:53.450 [2024-07-13 22:20:12.822027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:53.450 [2024-07-13 22:20:12.826169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:36:53.450 [2024-07-13 22:20:12.835438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.450 [2024-07-13 22:20:12.835961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.450 [2024-07-13 22:20:12.836003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.450 [2024-07-13 22:20:12.836034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.450 [2024-07-13 22:20:12.836321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.450 [2024-07-13 22:20:12.836609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.450 [2024-07-13 22:20:12.836640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.450 [2024-07-13 22:20:12.836662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.711 [2024-07-13 22:20:12.840971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:53.711 [2024-07-13 22:20:12.850126] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.711 [2024-07-13 22:20:12.850702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.711 [2024-07-13 22:20:12.850762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.711 [2024-07-13 22:20:12.850788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.711 [2024-07-13 22:20:12.851083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.711 [2024-07-13 22:20:12.851372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.711 [2024-07-13 22:20:12.851404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.711 [2024-07-13 22:20:12.851425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.711 [2024-07-13 22:20:12.855573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:53.711 [2024-07-13 22:20:12.864593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.711 [2024-07-13 22:20:12.865111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.712 [2024-07-13 22:20:12.865151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.712 [2024-07-13 22:20:12.865176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.712 [2024-07-13 22:20:12.865462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.712 [2024-07-13 22:20:12.865751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.712 [2024-07-13 22:20:12.865781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.712 [2024-07-13 22:20:12.865803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.712 [2024-07-13 22:20:12.869947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:53.712 [2024-07-13 22:20:12.879223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.712 [2024-07-13 22:20:12.879723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.712 [2024-07-13 22:20:12.879764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.712 [2024-07-13 22:20:12.879789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.712 [2024-07-13 22:20:12.880093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.712 [2024-07-13 22:20:12.880382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.712 [2024-07-13 22:20:12.880418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.712 [2024-07-13 22:20:12.880440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.712 [2024-07-13 22:20:12.884592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:53.712 [2024-07-13 22:20:12.893894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.712 [2024-07-13 22:20:12.894380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.712 [2024-07-13 22:20:12.894422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.712 [2024-07-13 22:20:12.894447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.712 [2024-07-13 22:20:12.894735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.712 [2024-07-13 22:20:12.895037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.712 [2024-07-13 22:20:12.895069] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.712 [2024-07-13 22:20:12.895091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.712 [2024-07-13 22:20:12.899247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:53.712 [2024-07-13 22:20:12.908498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.712 [2024-07-13 22:20:12.909018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.712 [2024-07-13 22:20:12.909058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.712 [2024-07-13 22:20:12.909084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.712 [2024-07-13 22:20:12.909370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.712 [2024-07-13 22:20:12.909660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.712 [2024-07-13 22:20:12.909703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.712 [2024-07-13 22:20:12.909725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.712 [2024-07-13 22:20:12.913879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:53.712 [2024-07-13 22:20:12.923162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.712 [2024-07-13 22:20:12.923685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.712 [2024-07-13 22:20:12.923725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.712 [2024-07-13 22:20:12.923750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.712 [2024-07-13 22:20:12.924049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.712 [2024-07-13 22:20:12.924337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.712 [2024-07-13 22:20:12.924368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.712 [2024-07-13 22:20:12.924390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.712 [2024-07-13 22:20:12.928539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:53.712 [2024-07-13 22:20:12.937820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.712 [2024-07-13 22:20:12.938348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.712 [2024-07-13 22:20:12.938388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.712 [2024-07-13 22:20:12.938413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.712 [2024-07-13 22:20:12.938700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.712 [2024-07-13 22:20:12.939002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.712 [2024-07-13 22:20:12.939034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.712 [2024-07-13 22:20:12.939056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.712 [2024-07-13 22:20:12.943201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:53.712 [2024-07-13 22:20:12.952482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.712 [2024-07-13 22:20:12.952970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.712 [2024-07-13 22:20:12.953011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.712 [2024-07-13 22:20:12.953036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.712 [2024-07-13 22:20:12.953322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.712 [2024-07-13 22:20:12.953611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.712 [2024-07-13 22:20:12.953642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.712 [2024-07-13 22:20:12.953664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.712 [2024-07-13 22:20:12.957822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:53.712 [2024-07-13 22:20:12.967101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.712 [2024-07-13 22:20:12.967611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.712 [2024-07-13 22:20:12.967652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.712 [2024-07-13 22:20:12.967678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.712 [2024-07-13 22:20:12.967975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.712 [2024-07-13 22:20:12.968367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.712 [2024-07-13 22:20:12.968412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.712 [2024-07-13 22:20:12.968452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.712 [2024-07-13 22:20:12.973678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:53.712 [2024-07-13 22:20:12.981746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.712 [2024-07-13 22:20:12.982276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.712 [2024-07-13 22:20:12.982319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.712 [2024-07-13 22:20:12.982352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.713 [2024-07-13 22:20:12.982640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.713 [2024-07-13 22:20:12.982940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.713 [2024-07-13 22:20:12.982972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.713 [2024-07-13 22:20:12.982994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.713 [2024-07-13 22:20:12.987139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:53.713 [2024-07-13 22:20:12.996419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.713 [2024-07-13 22:20:12.996991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.713 [2024-07-13 22:20:12.997032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.713 [2024-07-13 22:20:12.997059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.713 [2024-07-13 22:20:12.997347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.713 [2024-07-13 22:20:12.997636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.713 [2024-07-13 22:20:12.997667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.713 [2024-07-13 22:20:12.997689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.713 [2024-07-13 22:20:13.001846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:53.713 [2024-07-13 22:20:13.010909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.713 [2024-07-13 22:20:13.011423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.713 [2024-07-13 22:20:13.011465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.713 [2024-07-13 22:20:13.011490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.713 [2024-07-13 22:20:13.011779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.713 [2024-07-13 22:20:13.012081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.713 [2024-07-13 22:20:13.012112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.713 [2024-07-13 22:20:13.012135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.713 [2024-07-13 22:20:13.016295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:53.713 [2024-07-13 22:20:13.025365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.713 [2024-07-13 22:20:13.025897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.713 [2024-07-13 22:20:13.025938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.713 [2024-07-13 22:20:13.025963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.713 [2024-07-13 22:20:13.026250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.713 [2024-07-13 22:20:13.026547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.713 [2024-07-13 22:20:13.026583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.713 [2024-07-13 22:20:13.026606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.713 [2024-07-13 22:20:13.030770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:53.713 [2024-07-13 22:20:13.039809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.713 [2024-07-13 22:20:13.040342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.713 [2024-07-13 22:20:13.040382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.713 [2024-07-13 22:20:13.040408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.713 [2024-07-13 22:20:13.040695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.713 [2024-07-13 22:20:13.040999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.713 [2024-07-13 22:20:13.041031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.713 [2024-07-13 22:20:13.041053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.713 [2024-07-13 22:20:13.045205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:53.713 [2024-07-13 22:20:13.054257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.713 [2024-07-13 22:20:13.054774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.713 [2024-07-13 22:20:13.054814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.713 [2024-07-13 22:20:13.054840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.713 [2024-07-13 22:20:13.055135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.713 [2024-07-13 22:20:13.055425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.713 [2024-07-13 22:20:13.055456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.713 [2024-07-13 22:20:13.055477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.713 [2024-07-13 22:20:13.059628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:53.713 [2024-07-13 22:20:13.068937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.713 [2024-07-13 22:20:13.069451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.713 [2024-07-13 22:20:13.069491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.713 [2024-07-13 22:20:13.069516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.713 [2024-07-13 22:20:13.069804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.713 [2024-07-13 22:20:13.070105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.713 [2024-07-13 22:20:13.070136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.713 [2024-07-13 22:20:13.070158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.713 [2024-07-13 22:20:13.074318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:53.713 [2024-07-13 22:20:13.083399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.713 [2024-07-13 22:20:13.083929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.713 [2024-07-13 22:20:13.083969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.713 [2024-07-13 22:20:13.083995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.713 [2024-07-13 22:20:13.084283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.713 [2024-07-13 22:20:13.084575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.713 [2024-07-13 22:20:13.084606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.713 [2024-07-13 22:20:13.084628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.713 [2024-07-13 22:20:13.088799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:53.713 [2024-07-13 22:20:13.097851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.713 [2024-07-13 22:20:13.098371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.713 [2024-07-13 22:20:13.098427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.713 [2024-07-13 22:20:13.098458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.713 [2024-07-13 22:20:13.098789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.714 [2024-07-13 22:20:13.099094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.714 [2024-07-13 22:20:13.099127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.714 [2024-07-13 22:20:13.099149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.714 [2024-07-13 22:20:13.103464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:53.974 [2024-07-13 22:20:13.112436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.974 [2024-07-13 22:20:13.112950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.974 [2024-07-13 22:20:13.112992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.974 [2024-07-13 22:20:13.113018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.974 [2024-07-13 22:20:13.113306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.974 [2024-07-13 22:20:13.113596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.974 [2024-07-13 22:20:13.113626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.974 [2024-07-13 22:20:13.113648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.974 [2024-07-13 22:20:13.117823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:53.974 [2024-07-13 22:20:13.126891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.974 [2024-07-13 22:20:13.127415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.974 [2024-07-13 22:20:13.127455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.974 [2024-07-13 22:20:13.127487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.974 [2024-07-13 22:20:13.127776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.974 [2024-07-13 22:20:13.128079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.974 [2024-07-13 22:20:13.128110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.974 [2024-07-13 22:20:13.128132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.974 [2024-07-13 22:20:13.132287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:53.974 [2024-07-13 22:20:13.141356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.974 [2024-07-13 22:20:13.141847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.974 [2024-07-13 22:20:13.141895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.974 [2024-07-13 22:20:13.141922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.974 [2024-07-13 22:20:13.142209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.974 [2024-07-13 22:20:13.142499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.974 [2024-07-13 22:20:13.142529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.974 [2024-07-13 22:20:13.142551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.974 [2024-07-13 22:20:13.146715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:53.974 [2024-07-13 22:20:13.156038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.974 [2024-07-13 22:20:13.156541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.974 [2024-07-13 22:20:13.156582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.974 [2024-07-13 22:20:13.156608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.974 [2024-07-13 22:20:13.156905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.974 [2024-07-13 22:20:13.157195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.974 [2024-07-13 22:20:13.157226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.974 [2024-07-13 22:20:13.157248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.974 [2024-07-13 22:20:13.161398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:53.974 [2024-07-13 22:20:13.170696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.974 [2024-07-13 22:20:13.171228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.974 [2024-07-13 22:20:13.171268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.974 [2024-07-13 22:20:13.171293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.974 [2024-07-13 22:20:13.171578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.974 [2024-07-13 22:20:13.171888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.974 [2024-07-13 22:20:13.171919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.974 [2024-07-13 22:20:13.171942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.974 [2024-07-13 22:20:13.176093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:53.974 [2024-07-13 22:20:13.185142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.974 [2024-07-13 22:20:13.185682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.974 [2024-07-13 22:20:13.185722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.974 [2024-07-13 22:20:13.185748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.974 [2024-07-13 22:20:13.186046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.974 [2024-07-13 22:20:13.186336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.974 [2024-07-13 22:20:13.186367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.975 [2024-07-13 22:20:13.186388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.975 [2024-07-13 22:20:13.190550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:53.975 [2024-07-13 22:20:13.199602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.975 [2024-07-13 22:20:13.200159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.975 [2024-07-13 22:20:13.200200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.975 [2024-07-13 22:20:13.200226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.975 [2024-07-13 22:20:13.200512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.975 [2024-07-13 22:20:13.200801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.975 [2024-07-13 22:20:13.200832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.975 [2024-07-13 22:20:13.200854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.975 [2024-07-13 22:20:13.205040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:53.975 [2024-07-13 22:20:13.214119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.975 [2024-07-13 22:20:13.214632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.975 [2024-07-13 22:20:13.214672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.975 [2024-07-13 22:20:13.214697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.975 [2024-07-13 22:20:13.214996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.975 [2024-07-13 22:20:13.215287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.975 [2024-07-13 22:20:13.215318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.975 [2024-07-13 22:20:13.215340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.975 [2024-07-13 22:20:13.219512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:53.975 [2024-07-13 22:20:13.229036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.975 [2024-07-13 22:20:13.229551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.975 [2024-07-13 22:20:13.229594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.975 [2024-07-13 22:20:13.229621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.975 [2024-07-13 22:20:13.229930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.975 [2024-07-13 22:20:13.230230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.975 [2024-07-13 22:20:13.230262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.975 [2024-07-13 22:20:13.230283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.975 [2024-07-13 22:20:13.234447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:53.975 [2024-07-13 22:20:13.243524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.975 [2024-07-13 22:20:13.244033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.975 [2024-07-13 22:20:13.244074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.975 [2024-07-13 22:20:13.244099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.975 [2024-07-13 22:20:13.244387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.975 [2024-07-13 22:20:13.244675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.975 [2024-07-13 22:20:13.244706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.975 [2024-07-13 22:20:13.244729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.975 [2024-07-13 22:20:13.248891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:53.975 [2024-07-13 22:20:13.257993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.975 [2024-07-13 22:20:13.258520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.975 [2024-07-13 22:20:13.258560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.975 [2024-07-13 22:20:13.258585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.975 [2024-07-13 22:20:13.258880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.975 [2024-07-13 22:20:13.259169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.975 [2024-07-13 22:20:13.259200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.975 [2024-07-13 22:20:13.259230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.975 [2024-07-13 22:20:13.263417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:53.975 [2024-07-13 22:20:13.272507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.975 [2024-07-13 22:20:13.273008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.975 [2024-07-13 22:20:13.273049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.975 [2024-07-13 22:20:13.273081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.975 [2024-07-13 22:20:13.273367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.975 [2024-07-13 22:20:13.273656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.975 [2024-07-13 22:20:13.273688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.975 [2024-07-13 22:20:13.273710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.975 [2024-07-13 22:20:13.277864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:53.975 [2024-07-13 22:20:13.287189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.975 [2024-07-13 22:20:13.287705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.975 [2024-07-13 22:20:13.287745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.975 [2024-07-13 22:20:13.287771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.975 [2024-07-13 22:20:13.288065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.975 [2024-07-13 22:20:13.288355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.975 [2024-07-13 22:20:13.288386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.975 [2024-07-13 22:20:13.288408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.975 [2024-07-13 22:20:13.292573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:53.975 [2024-07-13 22:20:13.301631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.975 [2024-07-13 22:20:13.302162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.975 [2024-07-13 22:20:13.302202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.975 [2024-07-13 22:20:13.302227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.975 [2024-07-13 22:20:13.302513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.975 [2024-07-13 22:20:13.302802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.975 [2024-07-13 22:20:13.302832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.975 [2024-07-13 22:20:13.302854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.975 [2024-07-13 22:20:13.307031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:53.975 [2024-07-13 22:20:13.316095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.975 [2024-07-13 22:20:13.316595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.975 [2024-07-13 22:20:13.316635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.975 [2024-07-13 22:20:13.316660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.975 [2024-07-13 22:20:13.316959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.975 [2024-07-13 22:20:13.317255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.975 [2024-07-13 22:20:13.317285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.975 [2024-07-13 22:20:13.317308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.975 [2024-07-13 22:20:13.321464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:53.975 [2024-07-13 22:20:13.330779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.975 [2024-07-13 22:20:13.331327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.975 [2024-07-13 22:20:13.331369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.975 [2024-07-13 22:20:13.331394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.975 [2024-07-13 22:20:13.331684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.975 [2024-07-13 22:20:13.331985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.975 [2024-07-13 22:20:13.332016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.975 [2024-07-13 22:20:13.332038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.975 [2024-07-13 22:20:13.336212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:53.975 [2024-07-13 22:20:13.344715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.975 [2024-07-13 22:20:13.345208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.975 [2024-07-13 22:20:13.345245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.975 [2024-07-13 22:20:13.345268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.975 [2024-07-13 22:20:13.345552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.975 [2024-07-13 22:20:13.345799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.975 [2024-07-13 22:20:13.345825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.975 [2024-07-13 22:20:13.345843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.975 [2024-07-13 22:20:13.349489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:53.975 [2024-07-13 22:20:13.358674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:53.975 [2024-07-13 22:20:13.359161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:53.975 [2024-07-13 22:20:13.359196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:53.975 [2024-07-13 22:20:13.359235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:53.975 [2024-07-13 22:20:13.359529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:53.975 [2024-07-13 22:20:13.359769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:53.975 [2024-07-13 22:20:13.359794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:53.975 [2024-07-13 22:20:13.359812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:53.975 [2024-07-13 22:20:13.363549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:54.235 [2024-07-13 22:20:13.373122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.235 [2024-07-13 22:20:13.373649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.235 [2024-07-13 22:20:13.373687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.235 [2024-07-13 22:20:13.373710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.235 [2024-07-13 22:20:13.374013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.235 [2024-07-13 22:20:13.374301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.235 [2024-07-13 22:20:13.374327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.235 [2024-07-13 22:20:13.374354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.235 [2024-07-13 22:20:13.378036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:54.235 [2024-07-13 22:20:13.387127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.235 [2024-07-13 22:20:13.387682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.235 [2024-07-13 22:20:13.387719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.235 [2024-07-13 22:20:13.387743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.235 [2024-07-13 22:20:13.388050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.235 [2024-07-13 22:20:13.388327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.235 [2024-07-13 22:20:13.388355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.235 [2024-07-13 22:20:13.388374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.235 [2024-07-13 22:20:13.392066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:54.235 [2024-07-13 22:20:13.401108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.235 [2024-07-13 22:20:13.401602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.235 [2024-07-13 22:20:13.401653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.235 [2024-07-13 22:20:13.401677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.235 [2024-07-13 22:20:13.401998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.235 [2024-07-13 22:20:13.402265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.235 [2024-07-13 22:20:13.402306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.235 [2024-07-13 22:20:13.402325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.235 [2024-07-13 22:20:13.405968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:54.235 [2024-07-13 22:20:13.414912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.235 [2024-07-13 22:20:13.415375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.235 [2024-07-13 22:20:13.415426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.235 [2024-07-13 22:20:13.415455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.235 [2024-07-13 22:20:13.415746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.235 [2024-07-13 22:20:13.416016] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.235 [2024-07-13 22:20:13.416043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.235 [2024-07-13 22:20:13.416062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.235 [2024-07-13 22:20:13.419599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:54.235 [2024-07-13 22:20:13.428629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.235 [2024-07-13 22:20:13.429176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.235 [2024-07-13 22:20:13.429226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.235 [2024-07-13 22:20:13.429249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.235 [2024-07-13 22:20:13.429526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.235 [2024-07-13 22:20:13.429765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.235 [2024-07-13 22:20:13.429790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.235 [2024-07-13 22:20:13.429808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.235 [2024-07-13 22:20:13.433333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:54.235 [2024-07-13 22:20:13.442478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.235 [2024-07-13 22:20:13.442951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.235 [2024-07-13 22:20:13.442987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.235 [2024-07-13 22:20:13.443011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.235 [2024-07-13 22:20:13.443305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.235 [2024-07-13 22:20:13.443543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.235 [2024-07-13 22:20:13.443568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.235 [2024-07-13 22:20:13.443586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.235 [2024-07-13 22:20:13.447079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:54.235 [2024-07-13 22:20:13.456380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.235 [2024-07-13 22:20:13.456888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.236 [2024-07-13 22:20:13.456933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.236 [2024-07-13 22:20:13.456956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.236 [2024-07-13 22:20:13.457253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.236 [2024-07-13 22:20:13.457495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.236 [2024-07-13 22:20:13.457521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.236 [2024-07-13 22:20:13.457538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.236 [2024-07-13 22:20:13.461143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:54.236 [2024-07-13 22:20:13.470452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.236 [2024-07-13 22:20:13.470957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.236 [2024-07-13 22:20:13.470994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.236 [2024-07-13 22:20:13.471017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.236 [2024-07-13 22:20:13.471303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.236 [2024-07-13 22:20:13.471599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.236 [2024-07-13 22:20:13.471630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.236 [2024-07-13 22:20:13.471650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.236 [2024-07-13 22:20:13.476933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:54.236 [2024-07-13 22:20:13.484403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.236 [2024-07-13 22:20:13.484855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.236 [2024-07-13 22:20:13.484906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.236 [2024-07-13 22:20:13.484931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.236 [2024-07-13 22:20:13.485197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.236 [2024-07-13 22:20:13.485452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.236 [2024-07-13 22:20:13.485478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.236 [2024-07-13 22:20:13.485496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.236 [2024-07-13 22:20:13.489070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:54.236 [2024-07-13 22:20:13.498308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.236 [2024-07-13 22:20:13.498807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.236 [2024-07-13 22:20:13.498844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.236 [2024-07-13 22:20:13.498876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.236 [2024-07-13 22:20:13.499163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.236 [2024-07-13 22:20:13.499417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.236 [2024-07-13 22:20:13.499442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.236 [2024-07-13 22:20:13.499460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.236 [2024-07-13 22:20:13.502932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:54.236 [2024-07-13 22:20:13.511992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.236 [2024-07-13 22:20:13.512486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.236 [2024-07-13 22:20:13.512537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.236 [2024-07-13 22:20:13.512561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.236 [2024-07-13 22:20:13.512859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.236 [2024-07-13 22:20:13.513130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.236 [2024-07-13 22:20:13.513156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.236 [2024-07-13 22:20:13.513175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.236 [2024-07-13 22:20:13.516640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:54.236 [2024-07-13 22:20:13.525804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.236 [2024-07-13 22:20:13.526502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.236 [2024-07-13 22:20:13.526566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.236 [2024-07-13 22:20:13.526606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.236 [2024-07-13 22:20:13.526910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.236 [2024-07-13 22:20:13.527159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.236 [2024-07-13 22:20:13.527201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.236 [2024-07-13 22:20:13.527220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.236 [2024-07-13 22:20:13.530662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:54.236 [2024-07-13 22:20:13.539629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.236 [2024-07-13 22:20:13.540190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.236 [2024-07-13 22:20:13.540227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.236 [2024-07-13 22:20:13.540266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.236 [2024-07-13 22:20:13.540545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.236 [2024-07-13 22:20:13.540783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.236 [2024-07-13 22:20:13.540809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.236 [2024-07-13 22:20:13.540827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.236 [2024-07-13 22:20:13.544328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:54.236 [2024-07-13 22:20:13.553504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.236 [2024-07-13 22:20:13.554054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.236 [2024-07-13 22:20:13.554096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.236 [2024-07-13 22:20:13.554120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.236 [2024-07-13 22:20:13.554415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.236 [2024-07-13 22:20:13.554653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.236 [2024-07-13 22:20:13.554679] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.236 [2024-07-13 22:20:13.554697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.236 [2024-07-13 22:20:13.558182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:54.236 [2024-07-13 22:20:13.567226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.236 [2024-07-13 22:20:13.567694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.236 [2024-07-13 22:20:13.567729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.236 [2024-07-13 22:20:13.567752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.236 [2024-07-13 22:20:13.568023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.236 [2024-07-13 22:20:13.568281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.236 [2024-07-13 22:20:13.568306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.236 [2024-07-13 22:20:13.568324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.236 [2024-07-13 22:20:13.571766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:54.236 [2024-07-13 22:20:13.580970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.236 [2024-07-13 22:20:13.581441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.236 [2024-07-13 22:20:13.581477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.236 [2024-07-13 22:20:13.581500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.236 [2024-07-13 22:20:13.581794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.236 [2024-07-13 22:20:13.582063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.236 [2024-07-13 22:20:13.582091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.236 [2024-07-13 22:20:13.582109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.236 [2024-07-13 22:20:13.585559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:54.236 [2024-07-13 22:20:13.594695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.236 [2024-07-13 22:20:13.595207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.236 [2024-07-13 22:20:13.595257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.236 [2024-07-13 22:20:13.595279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.236 [2024-07-13 22:20:13.595554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.236 [2024-07-13 22:20:13.595796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.236 [2024-07-13 22:20:13.595822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.236 [2024-07-13 22:20:13.595840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.236 [2024-07-13 22:20:13.599392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:54.237 [2024-07-13 22:20:13.608564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.237 [2024-07-13 22:20:13.609078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.237 [2024-07-13 22:20:13.609115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.237 [2024-07-13 22:20:13.609138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.237 [2024-07-13 22:20:13.609418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.237 [2024-07-13 22:20:13.609657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.237 [2024-07-13 22:20:13.609682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.237 [2024-07-13 22:20:13.609700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.237 [2024-07-13 22:20:13.613224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:54.237 [2024-07-13 22:20:13.622539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.237 [2024-07-13 22:20:13.623049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.237 [2024-07-13 22:20:13.623087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.237 [2024-07-13 22:20:13.623111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.237 [2024-07-13 22:20:13.623404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.237 [2024-07-13 22:20:13.623663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.237 [2024-07-13 22:20:13.623691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.237 [2024-07-13 22:20:13.623724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.497 [2024-07-13 22:20:13.627480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:54.497 [2024-07-13 22:20:13.636421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.498 [2024-07-13 22:20:13.636974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.498 [2024-07-13 22:20:13.637013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.498 [2024-07-13 22:20:13.637037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.498 [2024-07-13 22:20:13.637314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.498 [2024-07-13 22:20:13.637551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.498 [2024-07-13 22:20:13.637576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.498 [2024-07-13 22:20:13.637594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.498 [2024-07-13 22:20:13.641132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:54.498 [2024-07-13 22:20:13.650337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.498 [2024-07-13 22:20:13.650862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.498 [2024-07-13 22:20:13.650907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.498 [2024-07-13 22:20:13.650940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.498 [2024-07-13 22:20:13.651224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.498 [2024-07-13 22:20:13.651462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.498 [2024-07-13 22:20:13.651487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.498 [2024-07-13 22:20:13.651505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.498 [2024-07-13 22:20:13.654974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:54.498 [2024-07-13 22:20:13.664152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.498 [2024-07-13 22:20:13.664667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.498 [2024-07-13 22:20:13.664719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.498 [2024-07-13 22:20:13.664742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.498 [2024-07-13 22:20:13.665036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.498 [2024-07-13 22:20:13.665292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.498 [2024-07-13 22:20:13.665319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.498 [2024-07-13 22:20:13.665337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.498 [2024-07-13 22:20:13.668827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:54.498 [2024-07-13 22:20:13.678031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.498 [2024-07-13 22:20:13.678497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.498 [2024-07-13 22:20:13.678546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.498 [2024-07-13 22:20:13.678570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.498 [2024-07-13 22:20:13.678844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.498 [2024-07-13 22:20:13.679111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.498 [2024-07-13 22:20:13.679138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.498 [2024-07-13 22:20:13.679156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.498 [2024-07-13 22:20:13.682615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:54.498 [2024-07-13 22:20:13.691783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.498 [2024-07-13 22:20:13.692242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.498 [2024-07-13 22:20:13.692297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.498 [2024-07-13 22:20:13.692321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.498 [2024-07-13 22:20:13.692599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.498 [2024-07-13 22:20:13.692837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.498 [2024-07-13 22:20:13.692888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.498 [2024-07-13 22:20:13.692908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.498 [2024-07-13 22:20:13.696405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:54.498 [2024-07-13 22:20:13.705554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.498 [2024-07-13 22:20:13.706100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.498 [2024-07-13 22:20:13.706137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.498 [2024-07-13 22:20:13.706161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.498 [2024-07-13 22:20:13.706453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.498 [2024-07-13 22:20:13.706690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.498 [2024-07-13 22:20:13.706715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.498 [2024-07-13 22:20:13.706733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.498 [2024-07-13 22:20:13.710241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:54.498 [2024-07-13 22:20:13.719457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.498 [2024-07-13 22:20:13.719961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.498 [2024-07-13 22:20:13.720013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.498 [2024-07-13 22:20:13.720036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.498 [2024-07-13 22:20:13.720330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.498 [2024-07-13 22:20:13.720579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.498 [2024-07-13 22:20:13.720604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.498 [2024-07-13 22:20:13.720622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.498 [2024-07-13 22:20:13.725805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:54.498 [2024-07-13 22:20:13.733411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.498 [2024-07-13 22:20:13.733964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.498 [2024-07-13 22:20:13.734017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.498 [2024-07-13 22:20:13.734042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.498 [2024-07-13 22:20:13.734340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.498 [2024-07-13 22:20:13.734582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.498 [2024-07-13 22:20:13.734608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.498 [2024-07-13 22:20:13.734626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.498 [2024-07-13 22:20:13.738278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:54.498 [2024-07-13 22:20:13.747251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.498 [2024-07-13 22:20:13.747684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.498 [2024-07-13 22:20:13.747734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.498 [2024-07-13 22:20:13.747771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.498 [2024-07-13 22:20:13.748046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.498 [2024-07-13 22:20:13.748303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.498 [2024-07-13 22:20:13.748329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.498 [2024-07-13 22:20:13.748347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.498 [2024-07-13 22:20:13.751782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:54.498 [2024-07-13 22:20:13.760986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.498 [2024-07-13 22:20:13.761488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.498 [2024-07-13 22:20:13.761524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.498 [2024-07-13 22:20:13.761547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.498 [2024-07-13 22:20:13.761842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.498 [2024-07-13 22:20:13.762109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.498 [2024-07-13 22:20:13.762136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.498 [2024-07-13 22:20:13.762154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.498 [2024-07-13 22:20:13.765616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:54.498 [2024-07-13 22:20:13.774795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.498 [2024-07-13 22:20:13.775301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.498 [2024-07-13 22:20:13.775352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.498 [2024-07-13 22:20:13.775376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.498 [2024-07-13 22:20:13.775654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.498 [2024-07-13 22:20:13.775917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.498 [2024-07-13 22:20:13.775944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.499 [2024-07-13 22:20:13.775967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.499 [2024-07-13 22:20:13.779431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:54.499 [2024-07-13 22:20:13.788577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.499 [2024-07-13 22:20:13.789078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.499 [2024-07-13 22:20:13.789114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.499 [2024-07-13 22:20:13.789137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.499 [2024-07-13 22:20:13.789428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.499 [2024-07-13 22:20:13.789665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.499 [2024-07-13 22:20:13.789690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.499 [2024-07-13 22:20:13.789708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.499 [2024-07-13 22:20:13.793191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:54.499 [2024-07-13 22:20:13.802305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.499 [2024-07-13 22:20:13.802733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.499 [2024-07-13 22:20:13.802768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.499 [2024-07-13 22:20:13.802790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.499 [2024-07-13 22:20:13.803098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.499 [2024-07-13 22:20:13.803353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.499 [2024-07-13 22:20:13.803379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.499 [2024-07-13 22:20:13.803396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.499 [2024-07-13 22:20:13.806829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:54.499 [2024-07-13 22:20:13.815989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.499 [2024-07-13 22:20:13.816411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.499 [2024-07-13 22:20:13.816462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.499 [2024-07-13 22:20:13.816485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.499 [2024-07-13 22:20:13.816764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.499 [2024-07-13 22:20:13.817033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.499 [2024-07-13 22:20:13.817060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.499 [2024-07-13 22:20:13.817078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.499 [2024-07-13 22:20:13.820535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:54.499 [2024-07-13 22:20:13.829687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.499 [2024-07-13 22:20:13.830188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.499 [2024-07-13 22:20:13.830233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.499 [2024-07-13 22:20:13.830257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.499 [2024-07-13 22:20:13.830547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.499 [2024-07-13 22:20:13.830785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.499 [2024-07-13 22:20:13.830810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.499 [2024-07-13 22:20:13.830828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.499 [2024-07-13 22:20:13.834307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:54.499 [2024-07-13 22:20:13.843463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.499 [2024-07-13 22:20:13.843948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.499 [2024-07-13 22:20:13.843986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.499 [2024-07-13 22:20:13.844009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.499 [2024-07-13 22:20:13.844301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.499 [2024-07-13 22:20:13.844538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.499 [2024-07-13 22:20:13.844564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.499 [2024-07-13 22:20:13.844582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.499 [2024-07-13 22:20:13.848051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:54.499 [2024-07-13 22:20:13.857165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.499 [2024-07-13 22:20:13.857658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.499 [2024-07-13 22:20:13.857709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.499 [2024-07-13 22:20:13.857733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.499 [2024-07-13 22:20:13.858052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.499 [2024-07-13 22:20:13.858309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.499 [2024-07-13 22:20:13.858334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.499 [2024-07-13 22:20:13.858353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.499 [2024-07-13 22:20:13.861830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:54.499 [2024-07-13 22:20:13.870996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.499 [2024-07-13 22:20:13.871499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.499 [2024-07-13 22:20:13.871536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.499 [2024-07-13 22:20:13.871559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.499 [2024-07-13 22:20:13.871857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.499 [2024-07-13 22:20:13.872126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.499 [2024-07-13 22:20:13.872153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.499 [2024-07-13 22:20:13.872171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.499 [2024-07-13 22:20:13.875625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:54.499 [2024-07-13 22:20:13.884835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.499 [2024-07-13 22:20:13.885352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.499 [2024-07-13 22:20:13.885388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.499 [2024-07-13 22:20:13.885411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.499 [2024-07-13 22:20:13.885682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.499 [2024-07-13 22:20:13.885950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.499 [2024-07-13 22:20:13.885993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.499 [2024-07-13 22:20:13.886012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.499 [2024-07-13 22:20:13.889762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:54.761 [2024-07-13 22:20:13.898781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.761 [2024-07-13 22:20:13.899289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.761 [2024-07-13 22:20:13.899327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.761 [2024-07-13 22:20:13.899350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.761 [2024-07-13 22:20:13.899642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.761 [2024-07-13 22:20:13.899903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.761 [2024-07-13 22:20:13.899931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.761 [2024-07-13 22:20:13.899949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.761 [2024-07-13 22:20:13.903405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:54.761 [2024-07-13 22:20:13.912552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.761 [2024-07-13 22:20:13.913037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.761 [2024-07-13 22:20:13.913073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.761 [2024-07-13 22:20:13.913097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.761 [2024-07-13 22:20:13.913387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.761 [2024-07-13 22:20:13.913624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.761 [2024-07-13 22:20:13.913649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.761 [2024-07-13 22:20:13.913685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.761 [2024-07-13 22:20:13.917157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:54.761 [2024-07-13 22:20:13.926403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.761 [2024-07-13 22:20:13.926868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.761 [2024-07-13 22:20:13.926926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.761 [2024-07-13 22:20:13.926950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.761 [2024-07-13 22:20:13.927222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.761 [2024-07-13 22:20:13.927459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.761 [2024-07-13 22:20:13.927485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.761 [2024-07-13 22:20:13.927503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.761 [2024-07-13 22:20:13.930893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:54.761 [2024-07-13 22:20:13.940221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.761 [2024-07-13 22:20:13.940842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.761 [2024-07-13 22:20:13.940884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.761 [2024-07-13 22:20:13.940909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.761 [2024-07-13 22:20:13.941201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.761 [2024-07-13 22:20:13.941437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.761 [2024-07-13 22:20:13.941463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.761 [2024-07-13 22:20:13.941481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.761 [2024-07-13 22:20:13.944966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:54.761 [2024-07-13 22:20:13.954055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:54.761 [2024-07-13 22:20:13.954646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:54.761 [2024-07-13 22:20:13.954682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:54.761 [2024-07-13 22:20:13.954705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:54.761 [2024-07-13 22:20:13.955009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:54.761 [2024-07-13 22:20:13.955266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:54.761 [2024-07-13 22:20:13.955292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:54.761 [2024-07-13 22:20:13.955309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:54.761 [2024-07-13 22:20:13.958737] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:54.761 [2024-07-13 22:20:13.967937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:54.761 [2024-07-13 22:20:13.968418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:54.761 [2024-07-13 22:20:13.968469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:54.761 [2024-07-13 22:20:13.968492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:54.761 [2024-07-13 22:20:13.968787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:54.761 [2024-07-13 22:20:13.969056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:54.761 [2024-07-13 22:20:13.969084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:54.761 [2024-07-13 22:20:13.969102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:54.761 [2024-07-13 22:20:13.972557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[log truncated: the reset/reconnect cycle above repeats 31 more times between 22:20:13.982 and 22:20:14.410 with identical errors; only the timestamps change, since 10.0.0.2:4420 still has no listener]
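On Linux, errno 111 is ECONNREFUSED: each connect() to 10.0.0.2:4420 is rejected because nothing is listening on that port while the nvmf target is down (it was killed by the test, as the "Killed" line below shows). An illustrative way to observe the same condition from a shell, using the address and port taken from the log (this probe is not part of the job itself):

```bash
# Probe the NVMe-oF TCP listener; with the target down this fails the same
# way posix_sock_create does above (ECONNREFUSED / errno 111).
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
  echo "listener up on 10.0.0.2:4420"
else
  echo "connect() refused - no listener on 10.0.0.2:4420"
fi
```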
00:36:55.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 48732 Killed "${NVMF_APP[@]}" "$@" 00:36:55.023 22:20:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:36:55.023 22:20:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:55.023 22:20:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:55.023 22:20:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:55.282 22:20:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:55.282 22:20:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=49953 00:36:55.282 22:20:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:55.283 22:20:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 49953 00:36:55.283 22:20:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 49953 ']' 00:36:55.283 22:20:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:55.283 22:20:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:55.283 22:20:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:55.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:55.283 22:20:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:55.283 22:20:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:55.283 [2024-07-13 22:20:14.419766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.283 [2024-07-13 22:20:14.420312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.283 [2024-07-13 22:20:14.420355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.283 [2024-07-13 22:20:14.420382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.283 [2024-07-13 22:20:14.420670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.283 [2024-07-13 22:20:14.420981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.283 [2024-07-13 22:20:14.421013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.283 [2024-07-13 22:20:14.421035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.283 [2024-07-13 22:20:14.425191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
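At this point `tgt_init` restarts the target: the old nvmf app (pid 48732) has been killed, `nvmfappstart -m 0xE` launches a fresh `nvmf_tgt` (pid 49953) inside the cvl_0_0_ns_spdk network namespace, and `waitforlisten` blocks until the new process answers on its RPC socket. A rough sketch of that polling step, assuming the behavior implied by the trace (the rpc_addr default and max_retries=100 are values visible in the trace; the real helper lives in autotest_common.sh and this is not its verbatim body):

```bash
# Sketch only - assumed behavior of waitforlisten, not the actual helper.
waitforlisten_sketch() {
  local pid=$1
  local rpc_addr=${2:-/var/tmp/spdk.sock}   # default seen in the trace above
  local max_retries=100                     # value seen in the trace above
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  while (( max_retries-- > 0 )); do
    kill -0 "$pid" 2>/dev/null || return 1  # app died during startup
    [ -S "$rpc_addr" ] && return 0          # RPC socket exists -> app is listening
    sleep 0.5
  done
  return 1                                  # timed out
}
```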
[log truncated: six further identical reset/reconnect failure cycles between 22:20:14.434 and 22:20:14.510 omitted; the new target begins initializing in parallel]
00:36:55.283 [2024-07-13 22:20:14.508749] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:36:55.283 [2024-07-13 22:20:14.508890] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
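The EAL line shows the restarted target coming up with core mask 0xE (the `-m 0xE` passed to nvmfappstart becomes DPDK's `-c 0xE`). A core mask is just a bitmap of CPU ids; an illustrative decode:

```bash
# Illustrative only: list the cores a DPDK core mask selects.
mask=0xE
for i in $(seq 0 31); do
  (( (mask >> i) & 1 )) && echo "core $i selected"
done
# 0xE = 0b1110 -> cores 1, 2 and 3; core 0 is left for the rest of the system.
```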
[log truncated: six more identical reset/reconnect failure cycles between 22:20:14.520 and 22:20:14.595 omitted]
00:36:55.284 EAL: No free 2048 kB hugepages reported on node 1
[log truncated: four further identical reset/reconnect failure cycles between 22:20:14.604 and 22:20:14.653 omitted]
00:36:55.284 [2024-07-13 22:20:14.661300] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:55.284 [2024-07-13 22:20:14.663260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.284 [2024-07-13 22:20:14.663785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.284 [2024-07-13 22:20:14.663822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.284 [2024-07-13 22:20:14.663844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.284 [2024-07-13 22:20:14.664130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.284 [2024-07-13 22:20:14.664439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.284 [2024-07-13 22:20:14.664471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.284 [2024-07-13 22:20:14.664494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.284 [2024-07-13 22:20:14.668797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:55.545 [2024-07-13 22:20:14.678177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.545 [2024-07-13 22:20:14.678846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.545 [2024-07-13 22:20:14.678912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.545 [2024-07-13 22:20:14.678941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.545 [2024-07-13 22:20:14.679246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.545 [2024-07-13 22:20:14.679548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.545 [2024-07-13 22:20:14.679594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.545 [2024-07-13 22:20:14.679631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.545 [2024-07-13 22:20:14.684279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:55.545 [2024-07-13 22:20:14.692956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.545 [2024-07-13 22:20:14.693479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.545 [2024-07-13 22:20:14.693521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.545 [2024-07-13 22:20:14.693548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.545 [2024-07-13 22:20:14.693842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.545 [2024-07-13 22:20:14.694136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.545 [2024-07-13 22:20:14.694188] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.545 [2024-07-13 22:20:14.694207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.546 [2024-07-13 22:20:14.698513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:55.546 [2024-07-13 22:20:14.707601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.546 [2024-07-13 22:20:14.708129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.546 [2024-07-13 22:20:14.708174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.546 [2024-07-13 22:20:14.708198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.546 [2024-07-13 22:20:14.708514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.546 [2024-07-13 22:20:14.708810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.546 [2024-07-13 22:20:14.708840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.546 [2024-07-13 22:20:14.708862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.546 [2024-07-13 22:20:14.713162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:55.546 [2024-07-13 22:20:14.722164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.546 [2024-07-13 22:20:14.722708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.546 [2024-07-13 22:20:14.722748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.546 [2024-07-13 22:20:14.722773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.546 [2024-07-13 22:20:14.723090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.546 [2024-07-13 22:20:14.723398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.546 [2024-07-13 22:20:14.723444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.546 [2024-07-13 22:20:14.723466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.546 [2024-07-13 22:20:14.727675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:55.546 [2024-07-13 22:20:14.736668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.546 [2024-07-13 22:20:14.737215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.546 [2024-07-13 22:20:14.737268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.546 [2024-07-13 22:20:14.737306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.546 [2024-07-13 22:20:14.737735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.546 [2024-07-13 22:20:14.738134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.546 [2024-07-13 22:20:14.738185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.546 [2024-07-13 22:20:14.738231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.546 [2024-07-13 22:20:14.743226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:55.546 [2024-07-13 22:20:14.751259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.546 [2024-07-13 22:20:14.751786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.546 [2024-07-13 22:20:14.751825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.546 [2024-07-13 22:20:14.751848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.546 [2024-07-13 22:20:14.752123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.546 [2024-07-13 22:20:14.752430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.546 [2024-07-13 22:20:14.752461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.546 [2024-07-13 22:20:14.752483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.546 [2024-07-13 22:20:14.756707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:55.546 [2024-07-13 22:20:14.765819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.546 [2024-07-13 22:20:14.766311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.546 [2024-07-13 22:20:14.766352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.546 [2024-07-13 22:20:14.766378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.546 [2024-07-13 22:20:14.766670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.546 [2024-07-13 22:20:14.766985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.546 [2024-07-13 22:20:14.767012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.546 [2024-07-13 22:20:14.767032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.546 [2024-07-13 22:20:14.771181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
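The same nine-record cycle repeats above roughly every 14 ms (compare consecutive "resetting controller" timestamps): disconnect notice, TCP connect attempt, ECONNREFUSED, flush failure on the dead socket, controller marked failed, and the reset declared failed. The sketch below is a schematic of that observable sequence only; the function names are hypothetical, the connect step is stubbed to fail the way the log shows, and none of this is SPDK's actual reset path:

import errno
import time

def try_connect() -> int:
    # Stub standing in for the TCP connect that posix_sock_create keeps
    # failing in the log; returns an errno value, 0 on success.
    return errno.ECONNREFUSED

def reset_controller_once(nqn: str) -> bool:
    print(f"[{nqn}] resetting controller")        # nvme_ctrlr_disconnect
    rc = try_connect()
    if rc != 0:
        print(f"connect() failed, errno = {rc}")  # posix_sock_create
        print(f"[{nqn}] controller reinitialization failed")
        return False                              # "Resetting controller failed."
    return True

# The log shows this cycle firing every ~14 ms until the target's
# listener comes back or the test tears the initiator down.
for _ in range(3):
    if reset_controller_once("nqn.2016-06.io.spdk:cnode1"):
        break
    time.sleep(0.014)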
00:36:55.546 [2024-07-13 22:20:14.780434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.546 [2024-07-13 22:20:14.780986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.546 [2024-07-13 22:20:14.781024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.546 [2024-07-13 22:20:14.781048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.546 [2024-07-13 22:20:14.781356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.546 [2024-07-13 22:20:14.781650] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.546 [2024-07-13 22:20:14.781681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.546 [2024-07-13 22:20:14.781703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.546 [2024-07-13 22:20:14.785857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:55.546 [2024-07-13 22:20:14.795008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.546 [2024-07-13 22:20:14.795498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.546 [2024-07-13 22:20:14.795539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.546 [2024-07-13 22:20:14.795572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.546 [2024-07-13 22:20:14.795874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.546 [2024-07-13 22:20:14.796147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.546 [2024-07-13 22:20:14.796191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.546 [2024-07-13 22:20:14.796214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.546 [2024-07-13 22:20:14.800431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:55.546 [2024-07-13 22:20:14.809611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.546 [2024-07-13 22:20:14.810348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.546 [2024-07-13 22:20:14.810397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.546 [2024-07-13 22:20:14.810426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.546 [2024-07-13 22:20:14.810727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.546 [2024-07-13 22:20:14.811030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.546 [2024-07-13 22:20:14.811058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.546 [2024-07-13 22:20:14.811079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.546 [2024-07-13 22:20:14.815272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:55.546 [2024-07-13 22:20:14.824151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.546 [2024-07-13 22:20:14.824657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.546 [2024-07-13 22:20:14.824698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.546 [2024-07-13 22:20:14.824724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.546 [2024-07-13 22:20:14.825025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.546 [2024-07-13 22:20:14.825314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.546 [2024-07-13 22:20:14.825346] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.546 [2024-07-13 22:20:14.825368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.546 [2024-07-13 22:20:14.829557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:55.546 [2024-07-13 22:20:14.838679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.546 [2024-07-13 22:20:14.839304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.546 [2024-07-13 22:20:14.839345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.546 [2024-07-13 22:20:14.839371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.546 [2024-07-13 22:20:14.839664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.546 [2024-07-13 22:20:14.839987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.546 [2024-07-13 22:20:14.840015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.546 [2024-07-13 22:20:14.840033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.546 [2024-07-13 22:20:14.844204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:55.546 [2024-07-13 22:20:14.853230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.547 [2024-07-13 22:20:14.853757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.547 [2024-07-13 22:20:14.853798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.547 [2024-07-13 22:20:14.853823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.547 [2024-07-13 22:20:14.854129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.547 [2024-07-13 22:20:14.854439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.547 [2024-07-13 22:20:14.854470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.547 [2024-07-13 22:20:14.854492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.547 [2024-07-13 22:20:14.858610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:55.547 [2024-07-13 22:20:14.867644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.547 [2024-07-13 22:20:14.868387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.547 [2024-07-13 22:20:14.868443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.547 [2024-07-13 22:20:14.868472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.547 [2024-07-13 22:20:14.868774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.547 [2024-07-13 22:20:14.869076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.547 [2024-07-13 22:20:14.869103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.547 [2024-07-13 22:20:14.869123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.547 [2024-07-13 22:20:14.873263] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:55.547 [2024-07-13 22:20:14.882263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.547 [2024-07-13 22:20:14.882815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.547 [2024-07-13 22:20:14.882857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.547 [2024-07-13 22:20:14.882909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.547 [2024-07-13 22:20:14.883213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.547 [2024-07-13 22:20:14.883524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.547 [2024-07-13 22:20:14.883556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.547 [2024-07-13 22:20:14.883579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.547 [2024-07-13 22:20:14.887739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:55.547 [2024-07-13 22:20:14.896776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.547 [2024-07-13 22:20:14.897321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.547 [2024-07-13 22:20:14.897363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.547 [2024-07-13 22:20:14.897389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.547 [2024-07-13 22:20:14.897680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.547 [2024-07-13 22:20:14.897992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.547 [2024-07-13 22:20:14.898020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.547 [2024-07-13 22:20:14.898038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.547 [2024-07-13 22:20:14.902201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:55.547 [2024-07-13 22:20:14.911266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.547 [2024-07-13 22:20:14.911809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.547 [2024-07-13 22:20:14.911849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.547 [2024-07-13 22:20:14.911885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.547 [2024-07-13 22:20:14.912192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.547 [2024-07-13 22:20:14.912483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.547 [2024-07-13 22:20:14.912514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.547 [2024-07-13 22:20:14.912536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.547 [2024-07-13 22:20:14.916694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:55.547 [2024-07-13 22:20:14.924590] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:55.547 [2024-07-13 22:20:14.924637] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:55.547 [2024-07-13 22:20:14.924690] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:55.547 [2024-07-13 22:20:14.924727] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:55.547 [2024-07-13 22:20:14.924768] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
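The app_setup_trace notices just above are the one actionable hint in this stretch: tracing is enabled with group mask 0xFFFF, a live snapshot can be captured with 'spdk_trace -s nvmf -i 0' (or plain 'spdk_trace' while this is the only SPDK app running), and the raw buffer sits in /dev/shm/nvmf_trace.0. A small sketch for saving that buffer before the workspace cleanup wipes it; only the source path comes from the log, the artifacts destination is an assumed choice:

# Copy the trace buffer named by app_setup_trace for offline analysis.
# /dev/shm/nvmf_trace.0 is verbatim from the log; ./artifacts is an
# assumed destination, not something the log specifies.
import pathlib
import shutil

src = pathlib.Path("/dev/shm/nvmf_trace.0")
dst_dir = pathlib.Path("artifacts")
dst_dir.mkdir(exist_ok=True)
if src.exists():
    shutil.copy2(src, dst_dir / src.name)
    print(f"saved {src} to {dst_dir / src.name}")
else:
    print(f"{src} not found (trace buffer may already be gone)")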
00:36:55.547 [2024-07-13 22:20:14.924926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:55.547 [2024-07-13 22:20:14.925025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:55.547 [2024-07-13 22:20:14.925028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:55.547 [2024-07-13 22:20:14.925756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.547 [2024-07-13 22:20:14.926266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.547 [2024-07-13 22:20:14.926303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.547 [2024-07-13 22:20:14.926327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.547 [2024-07-13 22:20:14.926617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.547 [2024-07-13 22:20:14.926901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.547 [2024-07-13 22:20:14.926945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.547 [2024-07-13 22:20:14.926967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.547 [2024-07-13 22:20:14.930835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:55.807 [2024-07-13 22:20:14.940205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.807 [2024-07-13 22:20:14.940840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.807 [2024-07-13 22:20:14.940894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.807 [2024-07-13 22:20:14.940925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.807 [2024-07-13 22:20:14.941227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.807 [2024-07-13 22:20:14.941486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.807 [2024-07-13 22:20:14.941513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.807 [2024-07-13 22:20:14.941535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.807 [2024-07-13 22:20:14.945568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:55.807 [2024-07-13 22:20:14.954351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.807 [2024-07-13 22:20:14.954904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.807 [2024-07-13 22:20:14.954942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.807 [2024-07-13 22:20:14.954967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.807 [2024-07-13 22:20:14.955261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.807 [2024-07-13 22:20:14.955514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.807 [2024-07-13 22:20:14.955540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.807 [2024-07-13 22:20:14.955559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.807 [2024-07-13 22:20:14.959403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:55.807 [2024-07-13 22:20:14.968570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.807 [2024-07-13 22:20:14.969069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.807 [2024-07-13 22:20:14.969106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.807 [2024-07-13 22:20:14.969130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.807 [2024-07-13 22:20:14.969419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.807 [2024-07-13 22:20:14.969690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.807 [2024-07-13 22:20:14.969718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.807 [2024-07-13 22:20:14.969738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.807 [2024-07-13 22:20:14.973577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:55.807 [2024-07-13 22:20:14.982650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.807 [2024-07-13 22:20:14.983130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.807 [2024-07-13 22:20:14.983166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.807 [2024-07-13 22:20:14.983189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.807 [2024-07-13 22:20:14.983475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.807 [2024-07-13 22:20:14.983724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.807 [2024-07-13 22:20:14.983750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.807 [2024-07-13 22:20:14.983769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.807 [2024-07-13 22:20:14.987508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:55.807 [2024-07-13 22:20:14.997227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.807 [2024-07-13 22:20:14.997686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.807 [2024-07-13 22:20:14.997724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.807 [2024-07-13 22:20:14.997748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.807 [2024-07-13 22:20:14.998040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.807 [2024-07-13 22:20:14.998329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.808 [2024-07-13 22:20:14.998356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.808 [2024-07-13 22:20:14.998375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.808 [2024-07-13 22:20:15.002261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:55.808 [2024-07-13 22:20:15.011488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.808 [2024-07-13 22:20:15.012158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.808 [2024-07-13 22:20:15.012208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.808 [2024-07-13 22:20:15.012236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.808 [2024-07-13 22:20:15.012531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.808 [2024-07-13 22:20:15.012787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.808 [2024-07-13 22:20:15.012815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.808 [2024-07-13 22:20:15.012837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.808 [2024-07-13 22:20:15.016641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:55.808 [2024-07-13 22:20:15.025825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.808 [2024-07-13 22:20:15.026507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.808 [2024-07-13 22:20:15.026564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.808 [2024-07-13 22:20:15.026593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.808 [2024-07-13 22:20:15.026933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.808 [2024-07-13 22:20:15.027236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.808 [2024-07-13 22:20:15.027264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.808 [2024-07-13 22:20:15.027286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.808 [2024-07-13 22:20:15.031052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:55.808 [2024-07-13 22:20:15.040140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.808 [2024-07-13 22:20:15.040745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.808 [2024-07-13 22:20:15.040787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.808 [2024-07-13 22:20:15.040812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.808 [2024-07-13 22:20:15.041122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.808 [2024-07-13 22:20:15.041412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.808 [2024-07-13 22:20:15.041440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.808 [2024-07-13 22:20:15.041461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.808 [2024-07-13 22:20:15.045241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:55.808 [2024-07-13 22:20:15.054307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.808 [2024-07-13 22:20:15.054756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.808 [2024-07-13 22:20:15.054792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.808 [2024-07-13 22:20:15.054816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.808 [2024-07-13 22:20:15.055104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.808 [2024-07-13 22:20:15.055372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.808 [2024-07-13 22:20:15.055399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.808 [2024-07-13 22:20:15.055418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.808 [2024-07-13 22:20:15.059179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:55.808 [2024-07-13 22:20:15.068429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.808 [2024-07-13 22:20:15.068889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.808 [2024-07-13 22:20:15.068926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.808 [2024-07-13 22:20:15.068950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.808 [2024-07-13 22:20:15.069245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.808 [2024-07-13 22:20:15.069500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.808 [2024-07-13 22:20:15.069526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.808 [2024-07-13 22:20:15.069545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.808 [2024-07-13 22:20:15.073330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:55.808 [2024-07-13 22:20:15.082537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.808 [2024-07-13 22:20:15.083048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.808 [2024-07-13 22:20:15.083085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.808 [2024-07-13 22:20:15.083109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.808 [2024-07-13 22:20:15.083397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.808 [2024-07-13 22:20:15.083647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.808 [2024-07-13 22:20:15.083674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.808 [2024-07-13 22:20:15.083692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.808 [2024-07-13 22:20:15.087500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:55.808 [2024-07-13 22:20:15.096656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.808 [2024-07-13 22:20:15.097173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.808 [2024-07-13 22:20:15.097210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.808 [2024-07-13 22:20:15.097233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.808 [2024-07-13 22:20:15.097499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.808 [2024-07-13 22:20:15.097745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.808 [2024-07-13 22:20:15.097772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.808 [2024-07-13 22:20:15.097790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.808 [2024-07-13 22:20:15.101645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:55.808 [2024-07-13 22:20:15.110747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.808 [2024-07-13 22:20:15.111241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.808 [2024-07-13 22:20:15.111277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.808 [2024-07-13 22:20:15.111300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.808 [2024-07-13 22:20:15.111583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.808 [2024-07-13 22:20:15.111827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.808 [2024-07-13 22:20:15.111877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.808 [2024-07-13 22:20:15.111905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.808 [2024-07-13 22:20:15.115651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:55.808 [2024-07-13 22:20:15.124815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.808 [2024-07-13 22:20:15.125326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.808 [2024-07-13 22:20:15.125363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.808 [2024-07-13 22:20:15.125387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.808 [2024-07-13 22:20:15.125672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.808 [2024-07-13 22:20:15.125990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.808 [2024-07-13 22:20:15.126019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.808 [2024-07-13 22:20:15.126040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.808 [2024-07-13 22:20:15.129689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:36:55.808 [2024-07-13 22:20:15.138941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:36:55.808 [2024-07-13 22:20:15.139473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:55.808 [2024-07-13 22:20:15.139510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420 00:36:55.808 [2024-07-13 22:20:15.139533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set 00:36:55.808 [2024-07-13 22:20:15.139819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor 00:36:55.808 [2024-07-13 22:20:15.140101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:36:55.808 [2024-07-13 22:20:15.140129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:36:55.808 [2024-07-13 22:20:15.140149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:36:55.808 [2024-07-13 22:20:15.143807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:36:55.808 [2024-07-13 22:20:15.153378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:55.808 [2024-07-13 22:20:15.154040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:55.808 [2024-07-13 22:20:15.154089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2500 with addr=10.0.0.2, port=4420
00:36:55.808 [2024-07-13 22:20:15.154116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2500 is same with the state(5) to be set
00:36:55.808 [2024-07-13 22:20:15.154412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2500 (9): Bad file descriptor
00:36:55.808 [2024-07-13 22:20:15.154670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:36:55.809 [2024-07-13 22:20:15.154697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:36:55.809 [2024-07-13 22:20:15.154719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:36:55.809 [2024-07-13 22:20:15.158517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... the same nine-line disconnect/reconnect-failure cycle repeats 19 more times, identical except for timestamps advancing roughly every 14 ms, from 22:20:15.167 through the failure logged at 22:20:15.426 ...]
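errno = 111 is ECONNREFUSED on Linux: the initiator's TCP SYN to 10.0.0.2:4420 is answered with a RST because the target application, although running, has not yet installed a listener on that port (that only happens at the nvmf_subsystem_add_listener call further down). A quick manual probe that reproduces the same condition, assuming shell access on the initiator side, could look like this; it is a diagnostic sketch, not part of the test scripts:

  # bash's /dev/tcp pseudo-device issues a plain connect(); with no listener
  # on 10.0.0.2:4420 it fails the same way the SPDK sock layer does above.
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "port 4420 accepts connections"
  else
      echo "connection refused or timed out (matches errno = 111 above)"
  fi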
00:36:56.070 [... two further reconnect-failure cycles at 22:20:15.435 and 22:20:15.449, interleaved with the trace below ...]
00:36:56.070 22:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:36:56.070 22:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
00:36:56.070 22:20:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:36:56.070 22:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:36:56.070 22:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:56.070 [2024-07-13 22:20:15.454667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
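The `(( i == 0 ))` / `return 0` pair is the tail of waitforlisten in autotest_common.sh returning success, and timing_exit start_nvmf_tgt marks the target app as up, so the RPC configuration below can begin. Reduced to its core idea, the wait loop is roughly the following; this is a simplified sketch under assumed defaults, not the exact SPDK code:

  # Hypothetical reduction of waitforlisten: poll until the app's RPC socket
  # appears (or the process dies), with a bounded retry counter.
  waitforlisten_sketch() {
      local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock}
      local i
      for ((i = 100; i != 0; i--)); do
          kill -0 "$pid" 2>/dev/null || return 1   # app exited prematurely
          [[ -S $rpc_sock ]] && return 0           # RPC socket is listening
          sleep 0.1
      done
      return 1                                     # timed out
  }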
00:36:56.330 [... one reconnect-failure cycle at 22:20:15.464, same nine lines as above ...]
00:36:56.330 22:20:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:56.330 22:20:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:36:56.330 22:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:56.330 22:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:56.330 [2024-07-13 22:20:15.474527] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:56.330 [... one reconnect-failure cycle at 22:20:15.478 ...]
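rpc_cmd is the harness's thin wrapper around SPDK's scripts/rpc.py, talking to the app started above over its RPC socket. A stand-alone equivalent of this transport-creation call would look roughly like the following (default socket path assumed; per rpc.py's TCP options, -u sets the I/O unit size and -o toggles the C2H success optimization, but treat that flag gloss as best-effort rather than authoritative):

  # Sketch of the expanded rpc_cmd call; the harness resolves paths itself.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/spdk.sock \
      nvmf_create_transport -t tcp -o -u 8192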
00:36:56.330 [... two reconnect-failure cycles at 22:20:15.492 and 22:20:15.506, interleaved with the trace below ...]
00:36:56.330 22:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:56.330 22:20:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:56.330 22:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:56.330 22:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
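bdev_malloc_create 64 512 -b Malloc0 allocates a 64 MiB RAM-backed block device with a 512-byte block size; the bare "Malloc0" echoed a few lines below is that RPC's return value, the new bdev's name. In stand-alone form, as a sketch:

  # Create the RAM disk that will back the NVMe-oF namespace.
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
  # prints: Malloc0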
00:36:56.330 [... four more identical reconnect-failure cycles at 22:20:15.520, 22:20:15.534, 22:20:15.549 and 22:20:15.563 ...]
00:36:56.330 Malloc0
00:36:56.330 22:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:56.330 22:20:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:36:56.330 22:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:56.330 22:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:56.330 22:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:56.330 22:20:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:56.330 22:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:56.330 22:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:56.330 [... one reconnect-failure cycle at 22:20:15.577 ...]
00:36:56.330 22:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:56.330 22:20:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:56.330 22:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:36:56.330 22:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:56.330 [2024-07-13 22:20:15.588300] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:56.330 [2024-07-13 22:20:15.591653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:36:56.330 22:20:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:36:56.330 22:20:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 49271
00:36:56.330 [2024-07-13 22:20:15.679223] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
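This is the sequence that finally ends the ECONNREFUSED loop: a subsystem is created, the Malloc0 bdev is attached as its namespace, and only the last call installs the TCP listener on 10.0.0.2:4420. The next reconnect attempt (22:20:15.591) therefore completes, giving the "Resetting controller successful" NOTICE once initialization finishes. The equivalent direct RPC calls, sketched with the NQN and serial taken from the trace above:

  # Target-side setup in three calls; until the third one runs, every
  # initiator connect() is refused.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420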
00:37:06.313
00:37:06.313 Latency(us)
00:37:06.313 Device Information : runtime(s)    IOPS       MiB/s    Fail/s     TO/s    Average    min        max
00:37:06.313 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:37:06.313 Verification LBA range: start 0x0 length 0x4000
00:37:06.313 Nvme1n1            :      15.01    4408.69    17.22    9308.00    0.00    9303.68    1019.45    35535.08
00:37:06.313 ===================================================================================================================
00:37:06.313 Total              :               4408.69    17.22    9308.00    0.00    9303.68    1019.45    35535.08
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:37:06.313 rmmod nvme_tcp
00:37:06.313 rmmod nvme_fabrics
00:37:06.313 rmmod nvme_keyring
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 49953 ']'
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 49953
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 49953 ']'
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 49953
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 49953
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49953'
00:37:06.313 killing process with pid 49953
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 49953
00:37:06.313 22:20:25 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 49953
00:37:07.695 22:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:37:07.695 22:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
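Teardown mirrors setup: the subsystem is deleted, the kernel initiator modules are unloaded (the three rmmod lines are modprobe -v's verbose output, since removing nvme-tcp pulls nvme-fabrics and nvme-keyring with it), and the target app (pid 49953) is killed and reaped. In outline, and under the assumption that $nvmfpid holds the app's pid as in the harness, the cleanup amounts to:

  # Sketch of the nvmfcleanup/killprocess pattern from nvmf/common.sh; the
  # real loop tolerates failures and retries the unload up to 20 times.
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
  done
  set -e
  kill "$nvmfpid" && wait "$nvmfpid"   # $nvmfpid: the SPDK target app, 49953 here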
00:37:07.695 22:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:37:07.695 22:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:37:07.695 22:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:37:07.695 22:20:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:07.695 22:20:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:37:07.695 22:20:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:09.602 22:20:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:37:09.602
00:37:09.602 real 0m26.807s
00:37:09.602 user 1m13.477s
00:37:09.602 sys 0m4.791s
00:37:09.602 22:20:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:37:09.602 22:20:28 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:37:09.602 ************************************
00:37:09.602 END TEST nvmf_bdevperf
00:37:09.602 ************************************
00:37:09.602 22:20:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:37:09.602 22:20:28 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:37:09.602 22:20:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:37:09.602 22:20:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:37:09.602 22:20:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:37:09.602 ************************************
00:37:09.602 START TEST nvmf_target_disconnect
00:37:09.602 ************************************
00:37:09.602 22:20:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:37:09.860 * Looking for test storage...
00:37:09.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:09.860 22:20:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:09.860 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:37:09.860 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:09.860 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:09.860 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:09.860 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:09.860 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:09.860 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:09.860 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:09.860 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:09.860 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:09.860 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:09.860 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:09.860 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:09.860 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:09.860 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:09.860 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:09.860 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:09.860 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:09.860 22:20:29 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:09.860 22:20:29 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:09.860 22:20:29 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:09.860 22:20:29 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:37:09.861 22:20:29 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
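gather_supported_nvmf_pci_devs builds ID lists per NIC family: the e810/x722 arrays hold Intel parts (vendor 0x8086) and the mlx array covers Mellanox ConnectX/BlueField devices (vendor 0x15b3); the script then scans the PCI bus for matches. The same discovery can be approximated by hand, as a simplified sketch (0x8086:0x159b is the E810 ID actually matched below; the loop shape mimics nvmf/common.sh but is not its exact code):

  # Map every E810 PCI function to its net interface, mirroring the
  # pci_net_devs globbing that follows in the trace.
  for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
      for net in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$net" ] && echo "Found net device under $pci: $(basename "$net")"
      done
  done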
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:37:11.764 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:37:11.764 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]]
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:37:11.764 Found net devices under 0000:0a:00.0: cvl_0_0
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]]
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:37:11.764 Found net devices under 0000:0a:00.1: cvl_0_1
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:37:11.764 22:20:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:37:11.764 22:20:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:37:11.764 22:20:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:37:11.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:37:11.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms
00:37:11.764 
00:37:11.764 --- 10.0.0.2 ping statistics ---
00:37:11.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:11.764 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms
00:37:11.764 22:20:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:37:11.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:37:11.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms
00:37:11.765 
00:37:11.765 --- 10.0.0.1 ping statistics ---
00:37:11.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:11.765 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms
00:37:11.765 22:20:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:37:11.765 22:20:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0
00:37:11.765 22:20:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:37:11.765 22:20:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:37:11.765 22:20:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:37:11.765 22:20:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:37:11.765 22:20:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:37:11.765 22:20:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:37:11.765 22:20:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:37:11.765 22:20:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:37:11.765 22:20:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:37:11.765 22:20:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable
00:37:11.765 22:20:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:37:11.765 ************************************
00:37:11.765 START TEST nvmf_target_disconnect_tc1
00:37:11.765 ************************************
00:37:11.765 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1
00:37:11.765 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:37:11.765 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0
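
Condensed from the trace above, before the tc1 body continues: the test topology is the two E810 ports, evidently cabled back-to-back (the cross-namespace pings succeed), split across a network namespace. The target side (cvl_0_0, 10.0.0.2) lives in cvl_0_0_ns_spdk, the initiator side (cvl_0_1, 10.0.0.1) stays in the root namespace, and an iptables rule admits NVMe/TCP traffic on port 4420. The essential commands, exactly as they appear in the log:

    ip netns add cvl_0_0_ns_spdk                                  # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP
    ping -c 1 10.0.0.2                                            # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator
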
22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:11.765 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:11.765 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:11.765 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:11.765 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:11.765 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:11.765 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:11.765 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:37:11.765 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:37:11.765 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:12.024 EAL: No free 2048 kB hugepages reported on node 1 00:37:12.024 [2024-07-13 22:20:31.276810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:12.024 [2024-07-13 22:20:31.276975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2000 with addr=10.0.0.2, port=4420 00:37:12.024 [2024-07-13 22:20:31.277068] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:37:12.024 [2024-07-13 22:20:31.277104] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:12.024 [2024-07-13 22:20:31.277130] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:37:12.024 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:37:12.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:37:12.024 Initializing NVMe Controllers 00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:12.024 00:37:12.024 real 0m0.240s 00:37:12.024 user 0m0.098s 00:37:12.024 sys 
0m0.140s 00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:37:12.024 ************************************ 00:37:12.024 END TEST nvmf_target_disconnect_tc1 00:37:12.024 ************************************ 00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:12.024 ************************************ 00:37:12.024 START TEST nvmf_target_disconnect_tc2 00:37:12.024 ************************************ 00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=53352 00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 53352 00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 53352 ']' 00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:12.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
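
tc1, which just concluded above, is a negative test: no target is listening yet, so the reconnect example's connect() fails with errno 111 (ECONNREFUSED), spdk_nvme_probe() reports the failure, and the harness counts that as a pass because the command ran under NOT. The real helper lives in autotest_common.sh and does more bookkeeping; a simplified stand-in for the inverted assertion, with the reconnect path shortened, would be:

    # Simplified stand-in for NOT(): the test passes only when the wrapped
    # command exits nonzero.
    NOT() {
        if "$@"; then
            return 1        # unexpected success -> fail the test case
        fi
        return 0            # expected failure (here: ECONNREFUSED) -> pass
    }

    NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
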
00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:12.024 22:20:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:12.284 [2024-07-13 22:20:31.460731] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:12.284 [2024-07-13 22:20:31.460873] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:12.284 EAL: No free 2048 kB hugepages reported on node 1 00:37:12.284 [2024-07-13 22:20:31.598517] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:12.545 [2024-07-13 22:20:31.830042] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:12.545 [2024-07-13 22:20:31.830106] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:12.545 [2024-07-13 22:20:31.830129] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:12.545 [2024-07-13 22:20:31.830147] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:12.545 [2024-07-13 22:20:31.830179] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:12.545 [2024-07-13 22:20:31.830309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:37:12.545 [2024-07-13 22:20:31.830375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:37:12.545 [2024-07-13 22:20:31.830411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:37:12.545 [2024-07-13 22:20:31.830436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:37:13.113 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:13.113 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:37:13.113 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:13.113 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:13.113 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:13.113 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:13.113 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:13.113 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.113 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:13.373 Malloc0 00:37:13.373 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.373 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:13.373 22:20:32 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.373 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:13.373 [2024-07-13 22:20:32.523625] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:13.373 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.373 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:13.373 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.373 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:13.373 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.373 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:13.373 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.373 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:13.373 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.373 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:13.373 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.373 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:13.373 [2024-07-13 22:20:32.553327] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:13.373 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.373 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:13.373 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:13.373 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:13.373 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:13.373 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=53502 00:37:13.373 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:37:13.373 22:20:32 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:13.373 EAL: No free 2048 kB 
hugepages reported on node 1 00:37:15.282 22:20:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 53352 00:37:15.282 22:20:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Write completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Write completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Write completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Write completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Write completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Write completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Write completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Write completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Write completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting 
I/O failed 00:37:15.282 [2024-07-13 22:20:34.592929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Write completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Write completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Write completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Write completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Write completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Write completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Write completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Write completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Write completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Write completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Write completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Write completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Write completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Write completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 [2024-07-13 22:20:34.593566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.282 Read completed with error (sct=0, sc=8) 00:37:15.282 starting I/O failed 00:37:15.283 Read completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Read completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Read completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Read completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Read completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Read completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Read completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Read completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 
00:37:15.283 Write completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Read completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Write completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Write completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Write completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Write completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Write completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Read completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Write completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Read completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Read completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Read completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Write completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Read completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Write completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Read completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Read completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Write completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Read completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Read completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 Read completed with error (sct=0, sc=8) 00:37:15.283 starting I/O failed 00:37:15.283 [2024-07-13 22:20:34.594204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:37:15.283 [2024-07-13 22:20:34.594566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.283 [2024-07-13 22:20:34.594607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.283 qpair failed and we were unable to recover it. 00:37:15.283 [2024-07-13 22:20:34.594838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.283 [2024-07-13 22:20:34.594889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.283 qpair failed and we were unable to recover it. 00:37:15.283 [2024-07-13 22:20:34.595068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.283 [2024-07-13 22:20:34.595102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.283 qpair failed and we were unable to recover it. 00:37:15.283 [2024-07-13 22:20:34.595345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.283 [2024-07-13 22:20:34.595378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.283 qpair failed and we were unable to recover it. 
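
The qpair stanzas above and below come from tc2's fault injection, condensed here from the xtrace and RPC records earlier: bring up nvmf_tgt inside the target namespace, create a malloc-backed subsystem listening on 10.0.0.2:4420, let the reconnect initiator run, then kill -9 the target so every in-flight I/O completes in error and each reconnect attempt is refused. In sketch form, with $rootdir standing in for the jenkins workspace path and plain rpc.py standing in for the rpc_cmd wrapper the trace uses:

    rootdir=/path/to/spdk        # placeholder for the workspace path in the log
    rpc="$rootdir/scripts/rpc.py"

    # Target runs inside the namespace, pinned to cores 4-7 (-m 0xF0).
    ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!

    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_transport -t tcp -o
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Initiator I/O in the background, then yank the target out from under it.
    "$rootdir/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 "$nvmfpid"           # hard kill: sockets vanish, qpairs cannot recover
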
00:37:15.283 [2024-07-13 22:20:34.595580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.283 [2024-07-13 22:20:34.595632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.283 qpair failed and we were unable to recover it. 00:37:15.283 [2024-07-13 22:20:34.595846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.283 [2024-07-13 22:20:34.595910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.283 qpair failed and we were unable to recover it. 00:37:15.283 [2024-07-13 22:20:34.596112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.283 [2024-07-13 22:20:34.596147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.283 qpair failed and we were unable to recover it. 00:37:15.283 [2024-07-13 22:20:34.596347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.283 [2024-07-13 22:20:34.596381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.283 qpair failed and we were unable to recover it. 00:37:15.283 [2024-07-13 22:20:34.596587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.283 [2024-07-13 22:20:34.596620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.283 qpair failed and we were unable to recover it. 00:37:15.283 [2024-07-13 22:20:34.596855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.283 [2024-07-13 22:20:34.596902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.283 qpair failed and we were unable to recover it. 00:37:15.283 [2024-07-13 22:20:34.597130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.283 [2024-07-13 22:20:34.597172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.283 qpair failed and we were unable to recover it. 00:37:15.283 [2024-07-13 22:20:34.597414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.283 [2024-07-13 22:20:34.597464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.283 qpair failed and we were unable to recover it. 00:37:15.283 [2024-07-13 22:20:34.597683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.283 [2024-07-13 22:20:34.597716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.283 qpair failed and we were unable to recover it. 00:37:15.283 [2024-07-13 22:20:34.597919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.283 [2024-07-13 22:20:34.597953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.283 qpair failed and we were unable to recover it. 
00:37:15.283 [2024-07-13 22:20:34.598164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.283 [2024-07-13 22:20:34.598197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.283 qpair failed and we were unable to recover it. 00:37:15.283 [2024-07-13 22:20:34.598436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.283 [2024-07-13 22:20:34.598486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.283 qpair failed and we were unable to recover it. 00:37:15.283 [2024-07-13 22:20:34.598654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.283 [2024-07-13 22:20:34.598688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.283 qpair failed and we were unable to recover it. 00:37:15.283 [2024-07-13 22:20:34.598902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.283 [2024-07-13 22:20:34.598936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.283 qpair failed and we were unable to recover it. 00:37:15.283 [2024-07-13 22:20:34.599107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.283 [2024-07-13 22:20:34.599156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.283 qpair failed and we were unable to recover it. 00:37:15.283 [2024-07-13 22:20:34.599367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.283 [2024-07-13 22:20:34.599400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.283 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.599608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.599660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.599858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.599901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.600101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.600134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.600379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.600430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 
00:37:15.284 [2024-07-13 22:20:34.600658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.600690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.600952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.600987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.601200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.601234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.601445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.601477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.601678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.601711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.601922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.601975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.602176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.602225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.602462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.602495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.602681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.602716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.602927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.602962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 
00:37:15.284 [2024-07-13 22:20:34.603130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.603163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.603361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.603395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.603618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.603651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.603839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.603885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.604056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.604090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.604357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.604408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.604699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.604732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.604941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.604993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.605258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.605291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.605477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.605510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 
00:37:15.284 [2024-07-13 22:20:34.605744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.605777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.605958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.606009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.606201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.606251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.606463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.606515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.606768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.284 [2024-07-13 22:20:34.606801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.284 qpair failed and we were unable to recover it. 00:37:15.284 [2024-07-13 22:20:34.606995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.285 [2024-07-13 22:20:34.607029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.285 qpair failed and we were unable to recover it. 00:37:15.285 [2024-07-13 22:20:34.607218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.285 [2024-07-13 22:20:34.607251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.285 qpair failed and we were unable to recover it. 00:37:15.285 [2024-07-13 22:20:34.607482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.285 [2024-07-13 22:20:34.607531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.285 qpair failed and we were unable to recover it. 00:37:15.285 [2024-07-13 22:20:34.607757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.285 [2024-07-13 22:20:34.607790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.285 qpair failed and we were unable to recover it. 00:37:15.285 [2024-07-13 22:20:34.608008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.285 [2024-07-13 22:20:34.608060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.285 qpair failed and we were unable to recover it. 
00:37:15.285 [2024-07-13 22:20:34.608320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.285 [2024-07-13 22:20:34.608353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.285 qpair failed and we were unable to recover it. 00:37:15.285 [2024-07-13 22:20:34.608539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.285 [2024-07-13 22:20:34.608589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.285 qpair failed and we were unable to recover it. 00:37:15.285 [2024-07-13 22:20:34.608803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.285 [2024-07-13 22:20:34.608836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.285 qpair failed and we were unable to recover it. 00:37:15.285 [2024-07-13 22:20:34.609062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.285 [2024-07-13 22:20:34.609097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.285 qpair failed and we were unable to recover it. 00:37:15.285 [2024-07-13 22:20:34.609570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.285 [2024-07-13 22:20:34.609629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.285 qpair failed and we were unable to recover it. 00:37:15.285 [2024-07-13 22:20:34.609824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.285 [2024-07-13 22:20:34.609857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.285 qpair failed and we were unable to recover it. 00:37:15.285 [2024-07-13 22:20:34.610033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.285 [2024-07-13 22:20:34.610067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.285 qpair failed and we were unable to recover it. 00:37:15.285 [2024-07-13 22:20:34.610243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.285 [2024-07-13 22:20:34.610276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.285 qpair failed and we were unable to recover it. 00:37:15.285 [2024-07-13 22:20:34.610488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.285 [2024-07-13 22:20:34.610525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.285 qpair failed and we were unable to recover it. 00:37:15.285 [2024-07-13 22:20:34.610762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.285 [2024-07-13 22:20:34.610807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.285 qpair failed and we were unable to recover it. 
00:37:15.285 [2024-07-13 22:20:34.611007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.285 [2024-07-13 22:20:34.611045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:15.285 qpair failed and we were unable to recover it.
00:37:15.285-00:37:15.292 (last three messages repeated, with only timestamps varying, for each further connection attempt through [2024-07-13 22:20:34.665902]; every attempt fails with errno = 111 against tqpair=0x615000210000, addr=10.0.0.2, port=4420, and no qpair recovers)
00:37:15.292 [2024-07-13 22:20:34.666085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.292 [2024-07-13 22:20:34.666136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.292 qpair failed and we were unable to recover it. 00:37:15.292 [2024-07-13 22:20:34.666359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.292 [2024-07-13 22:20:34.666411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.292 qpair failed and we were unable to recover it. 00:37:15.292 [2024-07-13 22:20:34.666625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.292 [2024-07-13 22:20:34.666678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.292 qpair failed and we were unable to recover it. 00:37:15.292 [2024-07-13 22:20:34.666895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.292 [2024-07-13 22:20:34.666929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.292 qpair failed and we were unable to recover it. 00:37:15.292 [2024-07-13 22:20:34.667147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.292 [2024-07-13 22:20:34.667198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.292 qpair failed and we were unable to recover it. 00:37:15.293 [2024-07-13 22:20:34.667378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.293 [2024-07-13 22:20:34.667434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.293 qpair failed and we were unable to recover it. 00:37:15.293 [2024-07-13 22:20:34.667676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.293 [2024-07-13 22:20:34.667733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.293 qpair failed and we were unable to recover it. 00:37:15.293 [2024-07-13 22:20:34.667922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.293 [2024-07-13 22:20:34.667957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.293 qpair failed and we were unable to recover it. 00:37:15.293 [2024-07-13 22:20:34.668181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.293 [2024-07-13 22:20:34.668232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.293 qpair failed and we were unable to recover it. 00:37:15.293 [2024-07-13 22:20:34.668433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.293 [2024-07-13 22:20:34.668483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.293 qpair failed and we were unable to recover it. 
00:37:15.293 [2024-07-13 22:20:34.668676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.293 [2024-07-13 22:20:34.668711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.293 qpair failed and we were unable to recover it. 00:37:15.293 [2024-07-13 22:20:34.668945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.293 [2024-07-13 22:20:34.669004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.293 qpair failed and we were unable to recover it. 00:37:15.293 [2024-07-13 22:20:34.669245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.293 [2024-07-13 22:20:34.669296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.293 qpair failed and we were unable to recover it. 00:37:15.293 [2024-07-13 22:20:34.669539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.293 [2024-07-13 22:20:34.669590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.293 qpair failed and we were unable to recover it. 00:37:15.293 [2024-07-13 22:20:34.669788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.293 [2024-07-13 22:20:34.669821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.293 qpair failed and we were unable to recover it. 00:37:15.293 [2024-07-13 22:20:34.670042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.293 [2024-07-13 22:20:34.670095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.293 qpair failed and we were unable to recover it. 00:37:15.293 [2024-07-13 22:20:34.670342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.293 [2024-07-13 22:20:34.670392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.293 qpair failed and we were unable to recover it. 00:37:15.293 [2024-07-13 22:20:34.670605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.293 [2024-07-13 22:20:34.670655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.293 qpair failed and we were unable to recover it. 00:37:15.293 [2024-07-13 22:20:34.670855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.293 [2024-07-13 22:20:34.670897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.293 qpair failed and we were unable to recover it. 00:37:15.293 [2024-07-13 22:20:34.671108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.293 [2024-07-13 22:20:34.671159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.293 qpair failed and we were unable to recover it. 
00:37:15.568 [2024-07-13 22:20:34.671380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.568 [2024-07-13 22:20:34.671431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.568 qpair failed and we were unable to recover it. 00:37:15.568 [2024-07-13 22:20:34.671670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.568 [2024-07-13 22:20:34.671722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.568 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.671932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.671984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.672227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.672262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.672453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.672504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.672693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.672726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.672887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.672920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.673130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.673187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.673429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.673479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.673674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.673707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 
00:37:15.569 [2024-07-13 22:20:34.673932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.673970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.674191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.674226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.674508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.674559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.674779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.674812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.675036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.675087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.675305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.675356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.675559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.675611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.675824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.675857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.676089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.676139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.676355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.676405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 
00:37:15.569 [2024-07-13 22:20:34.676620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.676670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.676859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.676901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.677146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.677197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.677410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.677459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.677744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.677796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.677959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.678022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.678271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.678321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.678531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.678580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.678801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.678834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.679001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.679035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 
00:37:15.569 [2024-07-13 22:20:34.679273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.569 [2024-07-13 22:20:34.679325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.569 qpair failed and we were unable to recover it. 00:37:15.569 [2024-07-13 22:20:34.679544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.679594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.679785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.679819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.680039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.680089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.680302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.680352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.680567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.680618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.680838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.680876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.681094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.681144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.681347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.681398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.681637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.681687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 
00:37:15.570 [2024-07-13 22:20:34.681927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.681981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.682200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.682251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.682465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.682514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.682699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.682749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.682930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.682964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.683218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.683269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.683464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.683515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.683731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.683763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.684010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.684061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.684279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.684330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 
00:37:15.570 [2024-07-13 22:20:34.684538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.684592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.684756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.684789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.685011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.685063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.685302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.685352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.685562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.685612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.685802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.685835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.686067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.686118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.686332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.686383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.686584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.686634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.686847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.686887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 
00:37:15.570 [2024-07-13 22:20:34.687137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.570 [2024-07-13 22:20:34.687188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.570 qpair failed and we were unable to recover it. 00:37:15.570 [2024-07-13 22:20:34.687405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.687455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.687701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.687752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.687915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.687949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.688177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.688229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.688477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.688528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.688732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.688766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.689007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.689059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.689269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.689320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.689538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.689587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 
00:37:15.571 [2024-07-13 22:20:34.689813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.689846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.690070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.690122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.690375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.690426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.690665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.690715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.690949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.691000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.691162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.691201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.691429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.691464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.691703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.691754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.691937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.691973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.692168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.692205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 
00:37:15.571 [2024-07-13 22:20:34.692415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.692452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.692661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.692697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.692944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.692977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.693179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.693216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.693427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.693463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.693636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.693671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.693871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.693907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.694107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.694140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.694321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.694372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.694626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.694677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 
00:37:15.571 [2024-07-13 22:20:34.694893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.694931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.695158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.695192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.571 [2024-07-13 22:20:34.695432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.571 [2024-07-13 22:20:34.695483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.571 qpair failed and we were unable to recover it. 00:37:15.572 [2024-07-13 22:20:34.695701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.572 [2024-07-13 22:20:34.695751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.572 qpair failed and we were unable to recover it. 00:37:15.572 [2024-07-13 22:20:34.695924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.572 [2024-07-13 22:20:34.695958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:15.572 qpair failed and we were unable to recover it. 00:37:15.572 [2024-07-13 22:20:34.696216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.572 [2024-07-13 22:20:34.696269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.572 qpair failed and we were unable to recover it. 00:37:15.572 [2024-07-13 22:20:34.696524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.572 [2024-07-13 22:20:34.696564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.572 qpair failed and we were unable to recover it. 00:37:15.572 [2024-07-13 22:20:34.696739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.572 [2024-07-13 22:20:34.696776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.572 qpair failed and we were unable to recover it. 00:37:15.572 [2024-07-13 22:20:34.696963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.572 [2024-07-13 22:20:34.696997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.572 qpair failed and we were unable to recover it. 00:37:15.572 [2024-07-13 22:20:34.697207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.572 [2024-07-13 22:20:34.697245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.572 qpair failed and we were unable to recover it. 
00:37:15.572 [2024-07-13 22:20:34.697489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.572 [2024-07-13 22:20:34.697527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.572 qpair failed and we were unable to recover it. 00:37:15.572 [2024-07-13 22:20:34.697763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.572 [2024-07-13 22:20:34.697799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.572 qpair failed and we were unable to recover it. 00:37:15.572 [2024-07-13 22:20:34.698013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.572 [2024-07-13 22:20:34.698047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.572 qpair failed and we were unable to recover it. 00:37:15.572 [2024-07-13 22:20:34.698298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.572 [2024-07-13 22:20:34.698334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.572 qpair failed and we were unable to recover it. 00:37:15.572 [2024-07-13 22:20:34.698601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.572 [2024-07-13 22:20:34.698638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.572 qpair failed and we were unable to recover it. 00:37:15.572 [2024-07-13 22:20:34.698819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.572 [2024-07-13 22:20:34.698855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.572 qpair failed and we were unable to recover it. 00:37:15.572 [2024-07-13 22:20:34.699073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.572 [2024-07-13 22:20:34.699106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.572 qpair failed and we were unable to recover it. 00:37:15.572 [2024-07-13 22:20:34.699319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.572 [2024-07-13 22:20:34.699355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.572 qpair failed and we were unable to recover it. 00:37:15.572 [2024-07-13 22:20:34.699559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.572 [2024-07-13 22:20:34.699595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.572 qpair failed and we were unable to recover it. 00:37:15.572 [2024-07-13 22:20:34.699830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.572 [2024-07-13 22:20:34.699874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.572 qpair failed and we were unable to recover it. 
00:37:15.572 [2024-07-13 22:20:34.700084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.572 [2024-07-13 22:20:34.700118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.572 qpair failed and we were unable to recover it. 00:37:15.572 [2024-07-13 22:20:34.700334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.572 [2024-07-13 22:20:34.700370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.572 qpair failed and we were unable to recover it. 00:37:15.572 [2024-07-13 22:20:34.700561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.572 [2024-07-13 22:20:34.700597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.572 qpair failed and we were unable to recover it. 00:37:15.572 [2024-07-13 22:20:34.700823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.572 [2024-07-13 22:20:34.700859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.572 qpair failed and we were unable to recover it. 00:37:15.572 [2024-07-13 22:20:34.701066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.572 [2024-07-13 22:20:34.701099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.572 qpair failed and we were unable to recover it. 00:37:15.572 [2024-07-13 22:20:34.701321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.572 [2024-07-13 22:20:34.701358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.572 qpair failed and we were unable to recover it. 00:37:15.572 [2024-07-13 22:20:34.701626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.572 [2024-07-13 22:20:34.701662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.572 qpair failed and we were unable to recover it. 
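errno 111 on Linux is ECONNREFUSED: each reconnect attempt above reaches 10.0.0.2, but nothing is accepting on NVMe/TCP port 4420 while the target side is down, so posix_sock_create() fails and the qpair cannot be recovered. A minimal standalone C sketch (not part of this test; it assumes, like the log, that the peer answers the SYN with an RST rather than timing out) that reproduces the same errno:

/* connect_refused.c — minimal sketch reproducing "connect() failed, errno = 111".
 * Assumes no listener is on 10.0.0.2:4420, as in the log above.
 * Build: cc connect_refused.c -o connect_refused */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = {
                .sin_family = AF_INET,
                .sin_port = htons(4420),        /* NVMe/TCP port from the log */
        };

        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
                /* With an RST from the peer this prints:
                 * connect() failed, errno = 111 (Connection refused) */
                printf("connect() failed, errno = %d (%s)\n",
                       errno, strerror(errno));
        }
        close(fd);
        return 0;
}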
00:37:15.572 Read completed with error (sct=0, sc=8)
00:37:15.572 starting I/O failed
00:37:15.572 Read completed with error (sct=0, sc=8)
00:37:15.572 starting I/O failed
00:37:15.572 Read completed with error (sct=0, sc=8)
00:37:15.572 starting I/O failed
00:37:15.572 Read completed with error (sct=0, sc=8)
00:37:15.572 starting I/O failed
00:37:15.572 Read completed with error (sct=0, sc=8)
00:37:15.572 starting I/O failed
00:37:15.572 Read completed with error (sct=0, sc=8)
00:37:15.572 starting I/O failed
00:37:15.572 Read completed with error (sct=0, sc=8)
00:37:15.572 starting I/O failed
00:37:15.572 Read completed with error (sct=0, sc=8)
00:37:15.572 starting I/O failed
00:37:15.572 Write completed with error (sct=0, sc=8)
00:37:15.572 starting I/O failed
00:37:15.572 Read completed with error (sct=0, sc=8)
00:37:15.572 starting I/O failed
00:37:15.572 Write completed with error (sct=0, sc=8)
00:37:15.572 starting I/O failed
00:37:15.572 Write completed with error (sct=0, sc=8)
00:37:15.572 starting I/O failed
00:37:15.572 Read completed with error (sct=0, sc=8)
00:37:15.572 starting I/O failed
00:37:15.572 Read completed with error (sct=0, sc=8)
00:37:15.572 starting I/O failed
00:37:15.572 Write completed with error (sct=0, sc=8)
00:37:15.572 starting I/O failed
00:37:15.572 Write completed with error (sct=0, sc=8)
00:37:15.572 starting I/O failed
00:37:15.572 Read completed with error (sct=0, sc=8)
00:37:15.572 starting I/O failed
00:37:15.572 Write completed with error (sct=0, sc=8)
00:37:15.572 starting I/O failed
00:37:15.572 Write completed with error (sct=0, sc=8)
00:37:15.572 starting I/O failed
00:37:15.572 Write completed with error (sct=0, sc=8)
00:37:15.572 starting I/O failed
00:37:15.572 Read completed with error (sct=0, sc=8)
00:37:15.572 starting I/O failed
00:37:15.572 Write completed with error (sct=0, sc=8)
00:37:15.572 starting I/O failed
00:37:15.572 Read completed with error (sct=0, sc=8)
00:37:15.572 starting I/O failed
00:37:15.572 Write completed with error (sct=0, sc=8)
00:37:15.573 starting I/O failed
00:37:15.573 Write completed with error (sct=0, sc=8)
00:37:15.573 starting I/O failed
00:37:15.573 Read completed with error (sct=0, sc=8)
00:37:15.573 starting I/O failed
00:37:15.573 Write completed with error (sct=0, sc=8)
00:37:15.573 starting I/O failed
00:37:15.573 Read completed with error (sct=0, sc=8)
00:37:15.573 starting I/O failed
00:37:15.573 Write completed with error (sct=0, sc=8)
00:37:15.573 starting I/O failed
00:37:15.573 Read completed with error (sct=0, sc=8)
00:37:15.573 starting I/O failed
00:37:15.573 Write completed with error (sct=0, sc=8)
00:37:15.573 starting I/O failed
00:37:15.573 Read completed with error (sct=0, sc=8)
00:37:15.573 starting I/O failed
00:37:15.573 [2024-07-13 22:20:34.702339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:37:15.573 [2024-07-13 22:20:34.702592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.573 [2024-07-13 22:20:34.702647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:15.573 qpair failed and we were unable to recover it.
00:37:15.573 [2024-07-13 22:20:34.702954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.573 [2024-07-13 22:20:34.702991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:15.573 qpair failed and we were unable to recover it.
00:37:15.581 [2024-07-13 22:20:34.752227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.581 [2024-07-13 22:20:34.752261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:15.581 qpair failed and we were unable to recover it.
00:37:15.581 [2024-07-13 22:20:34.752474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.581 [2024-07-13 22:20:34.752516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.581 qpair failed and we were unable to recover it. 00:37:15.581 [2024-07-13 22:20:34.752756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.581 [2024-07-13 22:20:34.752793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.581 qpair failed and we were unable to recover it. 00:37:15.581 [2024-07-13 22:20:34.752970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.581 [2024-07-13 22:20:34.753010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.581 qpair failed and we were unable to recover it. 00:37:15.581 [2024-07-13 22:20:34.753224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.581 [2024-07-13 22:20:34.753262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.581 qpair failed and we were unable to recover it. 00:37:15.581 [2024-07-13 22:20:34.753499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.581 [2024-07-13 22:20:34.753536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.581 qpair failed and we were unable to recover it. 00:37:15.581 [2024-07-13 22:20:34.753778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.581 [2024-07-13 22:20:34.753810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.581 qpair failed and we were unable to recover it. 00:37:15.581 [2024-07-13 22:20:34.754034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.581 [2024-07-13 22:20:34.754071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.581 qpair failed and we were unable to recover it. 00:37:15.581 [2024-07-13 22:20:34.754314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.581 [2024-07-13 22:20:34.754347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.581 qpair failed and we were unable to recover it. 00:37:15.581 [2024-07-13 22:20:34.754566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.581 [2024-07-13 22:20:34.754600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.581 qpair failed and we were unable to recover it. 00:37:15.581 [2024-07-13 22:20:34.754840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.581 [2024-07-13 22:20:34.754883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.581 qpair failed and we were unable to recover it. 
00:37:15.581 [2024-07-13 22:20:34.755121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.581 [2024-07-13 22:20:34.755157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.581 qpair failed and we were unable to recover it. 00:37:15.581 [2024-07-13 22:20:34.755366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.581 [2024-07-13 22:20:34.755399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.581 qpair failed and we were unable to recover it. 00:37:15.581 [2024-07-13 22:20:34.755615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.581 [2024-07-13 22:20:34.755652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.581 qpair failed and we were unable to recover it. 00:37:15.581 [2024-07-13 22:20:34.755853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.581 [2024-07-13 22:20:34.755907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.581 qpair failed and we were unable to recover it. 00:37:15.581 [2024-07-13 22:20:34.756116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.581 [2024-07-13 22:20:34.756150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.581 qpair failed and we were unable to recover it. 00:37:15.581 [2024-07-13 22:20:34.756368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.581 [2024-07-13 22:20:34.756406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.581 qpair failed and we were unable to recover it. 00:37:15.581 [2024-07-13 22:20:34.756610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.581 [2024-07-13 22:20:34.756647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.581 qpair failed and we were unable to recover it. 00:37:15.581 [2024-07-13 22:20:34.756847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.581 [2024-07-13 22:20:34.756887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.581 qpair failed and we were unable to recover it. 00:37:15.581 [2024-07-13 22:20:34.757150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.581 [2024-07-13 22:20:34.757184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.581 qpair failed and we were unable to recover it. 00:37:15.581 [2024-07-13 22:20:34.757505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.581 [2024-07-13 22:20:34.757542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.581 qpair failed and we were unable to recover it. 
00:37:15.581 [2024-07-13 22:20:34.757730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.757763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.758016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.758054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.758281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.758318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.758544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.758577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.758800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.758853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.759072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.759108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.759325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.759373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.759606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.759644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.759849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.759900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.760123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.760157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 
00:37:15.582 [2024-07-13 22:20:34.760377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.760414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.760625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.760662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.760853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.760893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.761176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.761214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.761444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.761477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.761699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.761733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.761948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.761986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.762162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.762199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.762394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.762427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.762685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.762723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 
00:37:15.582 [2024-07-13 22:20:34.763011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.763046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.763237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.763271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.763508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.763550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.763786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.763824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.764015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.764064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.764318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.764356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.764564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.764603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.764806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.764843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.765056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.765090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.765275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.765314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 
00:37:15.582 [2024-07-13 22:20:34.765495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.765528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.765800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.765833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.582 qpair failed and we were unable to recover it. 00:37:15.582 [2024-07-13 22:20:34.766076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.582 [2024-07-13 22:20:34.766113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.766290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.766323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.766529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.766577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.766876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.766913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.767132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.767166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.767380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.767418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.767618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.767655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.767878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.767912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 
00:37:15.583 [2024-07-13 22:20:34.768126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.768162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.768338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.768375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.768603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.768636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.768859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.768903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.769130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.769178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.769407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.769440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.769675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.769712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.769900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.769937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.770198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.770231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.770477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.770514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 
00:37:15.583 [2024-07-13 22:20:34.770693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.770730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.770933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.770967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.771148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.771186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.771419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.771457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.771665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.771698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.771906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.771944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.772184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.772221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.772443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.772484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.772732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.772769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.772978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.773015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 
00:37:15.583 [2024-07-13 22:20:34.773228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.773261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.773555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.773592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.773804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.773847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.774102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.774136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.774412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.583 [2024-07-13 22:20:34.774449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.583 qpair failed and we were unable to recover it. 00:37:15.583 [2024-07-13 22:20:34.774682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.774719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.774925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.774959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.775176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.775224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.775451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.775488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.775704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.775738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 
00:37:15.584 [2024-07-13 22:20:34.775901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.775934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.776173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.776211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.776414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.776448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.776674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.776706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.776905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.776939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.777104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.777139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.777410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.777447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.777650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.777687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.777921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.777955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.778173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.778210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 
00:37:15.584 [2024-07-13 22:20:34.778397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.778429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.778647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.778680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.778862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.778905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.779114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.779147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.779356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.779387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.779618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.779654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.779855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.779897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.780093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.780126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.780353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.780390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.780624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.780662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 
00:37:15.584 [2024-07-13 22:20:34.780864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.780902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.781145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.781183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.781389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.781426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.781630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.781678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.781893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.781932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.584 qpair failed and we were unable to recover it. 00:37:15.584 [2024-07-13 22:20:34.782168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.584 [2024-07-13 22:20:34.782206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.782379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.782413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.782629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.782661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.782875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.782915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.783180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.783212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 
00:37:15.585 [2024-07-13 22:20:34.783455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.783492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.783719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.783756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.783998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.784035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.784312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.784350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.784561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.784598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.784805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.784838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.785009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.785044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.785269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.785308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.785563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.785607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.785800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.785837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 
00:37:15.585 [2024-07-13 22:20:34.786066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.786122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.786418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.786459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.786751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.786796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.787038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.787076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.787285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.787319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.787544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.787591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.787834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.787877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.788105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.788138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.788379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.788412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.788661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.788698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 
00:37:15.585 [2024-07-13 22:20:34.788909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.788948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.789147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.789184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.789388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.789421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.789621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.585 [2024-07-13 22:20:34.789658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.585 qpair failed and we were unable to recover it. 00:37:15.585 [2024-07-13 22:20:34.789887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.586 [2024-07-13 22:20:34.789924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.586 qpair failed and we were unable to recover it. 00:37:15.586 [2024-07-13 22:20:34.790130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.586 [2024-07-13 22:20:34.790167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.586 qpair failed and we were unable to recover it. 00:37:15.586 [2024-07-13 22:20:34.790360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.586 [2024-07-13 22:20:34.790407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.586 qpair failed and we were unable to recover it. 00:37:15.586 [2024-07-13 22:20:34.790609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.586 [2024-07-13 22:20:34.790660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.586 qpair failed and we were unable to recover it. 00:37:15.586 [2024-07-13 22:20:34.790873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.586 [2024-07-13 22:20:34.790912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:15.586 qpair failed and we were unable to recover it. 00:37:15.586 [2024-07-13 22:20:34.791191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.586 [2024-07-13 22:20:34.791233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.586 qpair failed and we were unable to recover it. 
00:37:15.586 [2024-07-13 22:20:34.791477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.586 [2024-07-13 22:20:34.791516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.586 qpair failed and we were unable to recover it.
00:37:15.586 [2024-07-13 22:20:34.791822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.586 [2024-07-13 22:20:34.791873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.586 qpair failed and we were unable to recover it.
00:37:15.586 [2024-07-13 22:20:34.792156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.586 [2024-07-13 22:20:34.792195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.586 qpair failed and we were unable to recover it.
00:37:15.586 [2024-07-13 22:20:34.792617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.586 [2024-07-13 22:20:34.792680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.586 qpair failed and we were unable to recover it.
00:37:15.586 [2024-07-13 22:20:34.792969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.586 [2024-07-13 22:20:34.793009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.586 qpair failed and we were unable to recover it.
00:37:15.586 [2024-07-13 22:20:34.793322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.586 [2024-07-13 22:20:34.793383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.586 qpair failed and we were unable to recover it.
00:37:15.586 [2024-07-13 22:20:34.793632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.586 [2024-07-13 22:20:34.793675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.586 qpair failed and we were unable to recover it.
00:37:15.586 [2024-07-13 22:20:34.793951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.586 [2024-07-13 22:20:34.793994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.586 qpair failed and we were unable to recover it.
00:37:15.586 [2024-07-13 22:20:34.794240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.586 [2024-07-13 22:20:34.794279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.586 qpair failed and we were unable to recover it.
00:37:15.586 [2024-07-13 22:20:34.794632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.586 [2024-07-13 22:20:34.794710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.586 qpair failed and we were unable to recover it.
00:37:15.586 [2024-07-13 22:20:34.794954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.586 [2024-07-13 22:20:34.794998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.586 qpair failed and we were unable to recover it.
00:37:15.586 [2024-07-13 22:20:34.795270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.586 [2024-07-13 22:20:34.795308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.586 qpair failed and we were unable to recover it.
00:37:15.586 [2024-07-13 22:20:34.795559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.586 [2024-07-13 22:20:34.795603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.586 qpair failed and we were unable to recover it.
00:37:15.586 [2024-07-13 22:20:34.795890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.586 [2024-07-13 22:20:34.795933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.586 qpair failed and we were unable to recover it.
00:37:15.586 [2024-07-13 22:20:34.796189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.586 [2024-07-13 22:20:34.796232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.586 qpair failed and we were unable to recover it.
00:37:15.586 [2024-07-13 22:20:34.796517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.586 [2024-07-13 22:20:34.796555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.586 qpair failed and we were unable to recover it.
00:37:15.586 [2024-07-13 22:20:34.796786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.586 [2024-07-13 22:20:34.796824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.586 qpair failed and we were unable to recover it.
00:37:15.586 [2024-07-13 22:20:34.797119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.586 [2024-07-13 22:20:34.797162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.586 qpair failed and we were unable to recover it.
00:37:15.586 [2024-07-13 22:20:34.797446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.586 [2024-07-13 22:20:34.797485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.586 qpair failed and we were unable to recover it.
00:37:15.586 [2024-07-13 22:20:34.797738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.586 [2024-07-13 22:20:34.797781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.586 qpair failed and we were unable to recover it.
00:37:15.586 [2024-07-13 22:20:34.798052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.586 [2024-07-13 22:20:34.798091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.586 qpair failed and we were unable to recover it.
00:37:15.586 [2024-07-13 22:20:34.798419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.586 [2024-07-13 22:20:34.798471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.586 qpair failed and we were unable to recover it.
00:37:15.586 [2024-07-13 22:20:34.798728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.586 [2024-07-13 22:20:34.798770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.586 qpair failed and we were unable to recover it.
00:37:15.586 [2024-07-13 22:20:34.799009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.586 [2024-07-13 22:20:34.799052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.586 qpair failed and we were unable to recover it.
00:37:15.586 [2024-07-13 22:20:34.799356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.587 [2024-07-13 22:20:34.799394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.587 qpair failed and we were unable to recover it.
00:37:15.587 [2024-07-13 22:20:34.799746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.587 [2024-07-13 22:20:34.799788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.587 qpair failed and we were unable to recover it.
00:37:15.587 [2024-07-13 22:20:34.800048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.587 [2024-07-13 22:20:34.800088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.587 qpair failed and we were unable to recover it.
00:37:15.587 [2024-07-13 22:20:34.800341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.587 [2024-07-13 22:20:34.800384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.587 qpair failed and we were unable to recover it.
00:37:15.587 [2024-07-13 22:20:34.800687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.587 [2024-07-13 22:20:34.800725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.587 qpair failed and we were unable to recover it.
00:37:15.587 [2024-07-13 22:20:34.800979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.587 [2024-07-13 22:20:34.801023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.587 qpair failed and we were unable to recover it.
00:37:15.587 [2024-07-13 22:20:34.801268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.587 [2024-07-13 22:20:34.801311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.587 qpair failed and we were unable to recover it.
00:37:15.587 [2024-07-13 22:20:34.801609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.587 [2024-07-13 22:20:34.801670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.587 qpair failed and we were unable to recover it.
00:37:15.587 [2024-07-13 22:20:34.801940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.587 [2024-07-13 22:20:34.801979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.587 qpair failed and we were unable to recover it.
00:37:15.587 [2024-07-13 22:20:34.802379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.587 [2024-07-13 22:20:34.802442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.587 qpair failed and we were unable to recover it.
00:37:15.587 [2024-07-13 22:20:34.802713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.587 [2024-07-13 22:20:34.802756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.587 qpair failed and we were unable to recover it.
00:37:15.587 [2024-07-13 22:20:34.803022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.587 [2024-07-13 22:20:34.803066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.587 qpair failed and we were unable to recover it.
00:37:15.587 [2024-07-13 22:20:34.803305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.587 [2024-07-13 22:20:34.803343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.587 qpair failed and we were unable to recover it.
00:37:15.587 [2024-07-13 22:20:34.803723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.587 [2024-07-13 22:20:34.803789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.587 qpair failed and we were unable to recover it.
00:37:15.587 [2024-07-13 22:20:34.804078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.587 [2024-07-13 22:20:34.804117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.587 qpair failed and we were unable to recover it.
00:37:15.587 [2024-07-13 22:20:34.804420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.587 [2024-07-13 22:20:34.804464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.587 qpair failed and we were unable to recover it.
00:37:15.587 [2024-07-13 22:20:34.804695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.587 [2024-07-13 22:20:34.804733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.587 qpair failed and we were unable to recover it.
00:37:15.587 [2024-07-13 22:20:34.804991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.587 [2024-07-13 22:20:34.805030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.587 qpair failed and we were unable to recover it.
00:37:15.587 [2024-07-13 22:20:34.805302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.587 [2024-07-13 22:20:34.805345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.587 qpair failed and we were unable to recover it.
00:37:15.587 [2024-07-13 22:20:34.805638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.587 [2024-07-13 22:20:34.805681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.587 qpair failed and we were unable to recover it.
00:37:15.587 [2024-07-13 22:20:34.805972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.587 [2024-07-13 22:20:34.806011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.587 qpair failed and we were unable to recover it.
00:37:15.587 [2024-07-13 22:20:34.806262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.587 [2024-07-13 22:20:34.806305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.587 qpair failed and we were unable to recover it.
00:37:15.587 [2024-07-13 22:20:34.806585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.806624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.806856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.806919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.807112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.807165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.807473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.807535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.807816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.807860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.808137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.808180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.808562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.808637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.808884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.808927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.809179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.809217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.809436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.809473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.809717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.809755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.810003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.810041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.810276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.810313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.810663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.810720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.810952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.810991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.811295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.811357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.811600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.811642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.811908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.811951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.812206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.812243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.812625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.812686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.812966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.813010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.813275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.813318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.813595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.813633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.813908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.813946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.814161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.814204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.814479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.814516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.814739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.814791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.815042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.815081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.815582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.815626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.815891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.815934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.588 qpair failed and we were unable to recover it.
00:37:15.588 [2024-07-13 22:20:34.816198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.588 [2024-07-13 22:20:34.816237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.816555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.816592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.816879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.816922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.817170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.817225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.817533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.817570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.817821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.817864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.818148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.818190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.818429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.818473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.818731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.818768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.819103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.819147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.819396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.819438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.819666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.819709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.819993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.820032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.820432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.820501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.820781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.820823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.821067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.821111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.821358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.821395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.821690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.821732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.822007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.822050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.822318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.822360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.822610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.822647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.822893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.822933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.823172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.823215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.823424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.823467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.823734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.823771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.824053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.824096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.824378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.824420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.824637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.824679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.824926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.824965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.589 qpair failed and we were unable to recover it.
00:37:15.589 [2024-07-13 22:20:34.825276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.589 [2024-07-13 22:20:34.825328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.825586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.825629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.825831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.825881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.826147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.826186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.826507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.826568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.826814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.826862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.827160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.827203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.827474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.827512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.827786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.827829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.828113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.828157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.828541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.828577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.828854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.828918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.829225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.829291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.829538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.829580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.829843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.829900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.830179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.830218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.830584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.830652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.830928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.830971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.831238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.831291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.831582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.831620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.831905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.831949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.832201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.832244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.832525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.832568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.832857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.832900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.833176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.833218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.833465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.833508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.833747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.833791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.834021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.834060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.834402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.834464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.590 qpair failed and we were unable to recover it.
00:37:15.590 [2024-07-13 22:20:34.834710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.590 [2024-07-13 22:20:34.834752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.834996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.835039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.835294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.835331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.835537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.835579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.835823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.835878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.836117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.836159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.836402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.836440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.836760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.836823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.837098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.837140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.837399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.837442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.837736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.837774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.838032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.838075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.838317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.838370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.838606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.838648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.838925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.838964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.839202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.839270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.839545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.839588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.839862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.839910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.840140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.840177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.840455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.840518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.840787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.840830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.841075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.841118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.841329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.841366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.841591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.841629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.841856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.841905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.842240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.842282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.842555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.842593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.842883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.842940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.843169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.843212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.843465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.843507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.843730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.843781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.844077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.844122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.844375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.591 [2024-07-13 22:20:34.844423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.591 qpair failed and we were unable to recover it.
00:37:15.591 [2024-07-13 22:20:34.844697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.592 [2024-07-13 22:20:34.844741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.592 qpair failed and we were unable to recover it.
00:37:15.592 [2024-07-13 22:20:34.844990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.592 [2024-07-13 22:20:34.845028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.592 qpair failed and we were unable to recover it.
00:37:15.592 [2024-07-13 22:20:34.845228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.592 [2024-07-13 22:20:34.845267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.592 qpair failed and we were unable to recover it.
00:37:15.592 [2024-07-13 22:20:34.845488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.845527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.845735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.845786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.846006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.846045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.846271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.846309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.846569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.846607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.846844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.846888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.847127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.847165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.847355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.847408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.847663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.847702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.847954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.847997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 
00:37:15.592 [2024-07-13 22:20:34.848231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.848270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.848492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.848531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.848783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.848821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.849027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.849066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.849295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.849333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.849551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.849590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.849839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.849891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.850100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.850153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.850460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.850498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.850703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.850742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 
00:37:15.592 [2024-07-13 22:20:34.850936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.850974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.851164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.851202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.851441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.851480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.851712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.851750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.851963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.852003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.852222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.852261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.852523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.852562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.852808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.852847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.853103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.853145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.592 [2024-07-13 22:20:34.853368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.853411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 
00:37:15.592 [2024-07-13 22:20:34.853664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.592 [2024-07-13 22:20:34.853702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.592 qpair failed and we were unable to recover it. 00:37:15.593 [2024-07-13 22:20:34.853912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.593 [2024-07-13 22:20:34.853951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.593 qpair failed and we were unable to recover it. 00:37:15.593 [2024-07-13 22:20:34.854205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.593 [2024-07-13 22:20:34.854244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.593 qpair failed and we were unable to recover it. 00:37:15.593 [2024-07-13 22:20:34.854514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.593 [2024-07-13 22:20:34.854556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.593 qpair failed and we were unable to recover it. 00:37:15.593 [2024-07-13 22:20:34.854797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.593 [2024-07-13 22:20:34.854836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.593 qpair failed and we were unable to recover it. 00:37:15.593 [2024-07-13 22:20:34.855104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.593 [2024-07-13 22:20:34.855143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.593 qpair failed and we were unable to recover it. 00:37:15.593 [2024-07-13 22:20:34.855382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.593 [2024-07-13 22:20:34.855428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.593 qpair failed and we were unable to recover it. 00:37:15.593 [2024-07-13 22:20:34.855624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.593 [2024-07-13 22:20:34.855672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.593 qpair failed and we were unable to recover it. 00:37:15.593 [2024-07-13 22:20:34.855908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.593 [2024-07-13 22:20:34.855948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.593 qpair failed and we were unable to recover it. 00:37:15.593 [2024-07-13 22:20:34.856190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.593 [2024-07-13 22:20:34.856247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.593 qpair failed and we were unable to recover it. 
00:37:15.593 [2024-07-13 22:20:34.856494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.593 [2024-07-13 22:20:34.856537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.593 qpair failed and we were unable to recover it. 00:37:15.593 [2024-07-13 22:20:34.856801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.593 [2024-07-13 22:20:34.856844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.593 qpair failed and we were unable to recover it. 00:37:15.593 [2024-07-13 22:20:34.857078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.593 [2024-07-13 22:20:34.857116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.593 qpair failed and we were unable to recover it. 00:37:15.593 [2024-07-13 22:20:34.857337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.593 [2024-07-13 22:20:34.857376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.593 qpair failed and we were unable to recover it. 00:37:15.593 [2024-07-13 22:20:34.857630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.593 [2024-07-13 22:20:34.857668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.593 qpair failed and we were unable to recover it. 00:37:15.593 [2024-07-13 22:20:34.857862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.593 [2024-07-13 22:20:34.857909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.593 qpair failed and we were unable to recover it. 00:37:15.593 [2024-07-13 22:20:34.858167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.593 [2024-07-13 22:20:34.858206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.593 qpair failed and we were unable to recover it. 00:37:15.593 [2024-07-13 22:20:34.858496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.593 [2024-07-13 22:20:34.858538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.593 qpair failed and we were unable to recover it. 00:37:15.593 [2024-07-13 22:20:34.858785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.593 [2024-07-13 22:20:34.858828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.593 qpair failed and we were unable to recover it. 00:37:15.593 [2024-07-13 22:20:34.859079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.593 [2024-07-13 22:20:34.859121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.593 qpair failed and we were unable to recover it. 
00:37:15.593 [2024-07-13 22:20:34.859389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.593 [2024-07-13 22:20:34.859428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.593 qpair failed and we were unable to recover it. 00:37:15.593 [2024-07-13 22:20:34.859649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.593 [2024-07-13 22:20:34.859687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.593 qpair failed and we were unable to recover it. 00:37:15.593 [2024-07-13 22:20:34.859921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.593 [2024-07-13 22:20:34.859960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.593 qpair failed and we were unable to recover it. 00:37:15.593 [2024-07-13 22:20:34.860161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.593 [2024-07-13 22:20:34.860200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.593 qpair failed and we were unable to recover it. 00:37:15.593 [2024-07-13 22:20:34.860415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.593 [2024-07-13 22:20:34.860454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.593 qpair failed and we were unable to recover it. 00:37:15.593 [2024-07-13 22:20:34.860675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.860714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.860958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.861015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.861256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.861306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.861601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.861639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.861886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.861925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 
00:37:15.594 [2024-07-13 22:20:34.862129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.862179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.862391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.862430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.862660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.862698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.862914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.862953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.863174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.863212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.863428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.863466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.863707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.863747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.863954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.863995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.864211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.864258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.864480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.864524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 
00:37:15.594 [2024-07-13 22:20:34.864754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.864792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.865023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.865062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.865291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.865330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.865523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.865562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.865833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.865889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.866118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.866156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.866390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.866429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.866646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.866685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.866938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.866977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.867166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.867232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 
00:37:15.594 [2024-07-13 22:20:34.867497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.867540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.867793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.867841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.868098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.868138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.868373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.868412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.868638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.868677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.868909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.868948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.869147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.594 [2024-07-13 22:20:34.869186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.594 qpair failed and we were unable to recover it. 00:37:15.594 [2024-07-13 22:20:34.869385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.869423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.869648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.869687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.869915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.869956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 
00:37:15.595 [2024-07-13 22:20:34.870170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.870211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.870743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.870788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.871073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.871122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.871399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.871449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.871676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.871719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.871952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.871990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.872200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.872234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.872425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.872458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.872627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.872662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.872873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.872907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 
00:37:15.595 [2024-07-13 22:20:34.873109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.873142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.873325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.873358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.873522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.873554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.873741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.873774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.873960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.873995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.874167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.874201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.874377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.874410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.874589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.874622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.874822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.874855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.875043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.875082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 
00:37:15.595 [2024-07-13 22:20:34.875253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.875286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.875442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.875476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.875673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.875706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.875901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.875935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.876119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.876152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.876347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.876381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.876586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.876619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.876785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.876817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.595 qpair failed and we were unable to recover it. 00:37:15.595 [2024-07-13 22:20:34.876983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.595 [2024-07-13 22:20:34.877017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.877180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.877221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 
00:37:15.596 [2024-07-13 22:20:34.877386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.877419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.877609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.877641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.877803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.877836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.878045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.878079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.878268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.878301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.878460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.878493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.879041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.879080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.879288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.879323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.879541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.879580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.879746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.879779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 
00:37:15.596 [2024-07-13 22:20:34.879952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.879986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.880177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.880210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.880410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.880443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.880609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.880642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.880822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.880860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.881036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.881069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.881251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.881285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.881450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.881484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.881699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.881732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.881921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.881953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 
00:37:15.596 [2024-07-13 22:20:34.882129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.882162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.882359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.882392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.882602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.882635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.882818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.882851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.883065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.883098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.883318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.883360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.883530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.883564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.883754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.883788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.883970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.884004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 00:37:15.596 [2024-07-13 22:20:34.884215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.596 [2024-07-13 22:20:34.884252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.596 qpair failed and we were unable to recover it. 
00:37:15.596 [2024-07-13 22:20:34.884455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.597 [2024-07-13 22:20:34.884487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.597 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111 against 10.0.0.2 port 4420, then "qpair failed and we were unable to recover it.") repeats for tqpair=0x6150001ffe80 through 2024-07-13 22:20:34.915935 ...]
00:37:15.602 [2024-07-13 22:20:34.916130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.602 [2024-07-13 22:20:34.916190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:15.602 qpair failed and we were unable to recover it.
[... the same failure repeats for the new tqpair=0x6150001f2780 through 2024-07-13 22:20:34.931394 ...]
00:37:15.604 [2024-07-13 22:20:34.931655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.604 [2024-07-13 22:20:34.931708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:15.604 qpair failed and we were unable to recover it.
00:37:15.604 [2024-07-13 22:20:34.931961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.604 [2024-07-13 22:20:34.932004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.604 qpair failed and we were unable to recover it. 00:37:15.604 [2024-07-13 22:20:34.932186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.604 [2024-07-13 22:20:34.932223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.604 qpair failed and we were unable to recover it. 00:37:15.604 [2024-07-13 22:20:34.932461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.604 [2024-07-13 22:20:34.932494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.604 qpair failed and we were unable to recover it. 00:37:15.604 [2024-07-13 22:20:34.932723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.604 [2024-07-13 22:20:34.932760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.604 qpair failed and we were unable to recover it. 00:37:15.604 [2024-07-13 22:20:34.932998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.604 [2024-07-13 22:20:34.933031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.604 qpair failed and we were unable to recover it. 00:37:15.604 [2024-07-13 22:20:34.933210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.604 [2024-07-13 22:20:34.933242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.604 qpair failed and we were unable to recover it. 00:37:15.604 [2024-07-13 22:20:34.933428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.604 [2024-07-13 22:20:34.933459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.604 qpair failed and we were unable to recover it. 00:37:15.604 [2024-07-13 22:20:34.933783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.604 [2024-07-13 22:20:34.933841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.604 qpair failed and we were unable to recover it. 00:37:15.604 [2024-07-13 22:20:34.934031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.604 [2024-07-13 22:20:34.934064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.604 qpair failed and we were unable to recover it. 00:37:15.604 [2024-07-13 22:20:34.934274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.604 [2024-07-13 22:20:34.934310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.604 qpair failed and we were unable to recover it. 
00:37:15.604 [2024-07-13 22:20:34.934528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.604 [2024-07-13 22:20:34.934561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.604 qpair failed and we were unable to recover it. 00:37:15.604 [2024-07-13 22:20:34.934774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.604 [2024-07-13 22:20:34.934810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.604 qpair failed and we were unable to recover it. 00:37:15.604 [2024-07-13 22:20:34.935004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.604 [2024-07-13 22:20:34.935036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.604 qpair failed and we were unable to recover it. 00:37:15.604 [2024-07-13 22:20:34.935256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.604 [2024-07-13 22:20:34.935293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.604 qpair failed and we were unable to recover it. 00:37:15.604 [2024-07-13 22:20:34.935511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.604 [2024-07-13 22:20:34.935544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.604 qpair failed and we were unable to recover it. 00:37:15.604 [2024-07-13 22:20:34.935790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.604 [2024-07-13 22:20:34.935826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.604 qpair failed and we were unable to recover it. 00:37:15.604 [2024-07-13 22:20:34.936075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.604 [2024-07-13 22:20:34.936108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.604 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.936350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.936386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.936620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.936652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.936817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.936850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 
00:37:15.605 [2024-07-13 22:20:34.937025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.937058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.937268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.937318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.937525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.937558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.937791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.937828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.938035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.938068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.938278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.938325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.938532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.938564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.938813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.938845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.939060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.939111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.939345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.939381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 
00:37:15.605 [2024-07-13 22:20:34.939621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.939653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.939863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.939914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.940099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.940136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.940373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.940405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.940618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.940650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.940877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.940922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.941115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.941147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.941371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.941404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.941605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.941662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.941887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.941934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 
00:37:15.605 [2024-07-13 22:20:34.942152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.942192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.942412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.942448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.942630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.942662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.942892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.942931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.943167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.943204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.943439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.943471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.943672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.943704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.944003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.944037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.944281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.944334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.605 [2024-07-13 22:20:34.944570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.944606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 
00:37:15.605 [2024-07-13 22:20:34.944801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.605 [2024-07-13 22:20:34.944835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.605 qpair failed and we were unable to recover it. 00:37:15.606 [2024-07-13 22:20:34.945057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.606 [2024-07-13 22:20:34.945090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.606 qpair failed and we were unable to recover it. 00:37:15.606 [2024-07-13 22:20:34.945349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.606 [2024-07-13 22:20:34.945386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.606 qpair failed and we were unable to recover it. 00:37:15.606 [2024-07-13 22:20:34.945627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.606 [2024-07-13 22:20:34.945660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.606 qpair failed and we were unable to recover it. 00:37:15.606 [2024-07-13 22:20:34.945889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.606 [2024-07-13 22:20:34.945939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.606 qpair failed and we were unable to recover it. 00:37:15.606 [2024-07-13 22:20:34.946123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.606 [2024-07-13 22:20:34.946172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.606 qpair failed and we were unable to recover it. 00:37:15.606 [2024-07-13 22:20:34.946447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.606 [2024-07-13 22:20:34.946484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.606 qpair failed and we were unable to recover it. 00:37:15.884 [2024-07-13 22:20:34.946718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.884 [2024-07-13 22:20:34.946756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.884 qpair failed and we were unable to recover it. 00:37:15.884 [2024-07-13 22:20:34.946979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.884 [2024-07-13 22:20:34.947013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.884 qpair failed and we were unable to recover it. 00:37:15.884 [2024-07-13 22:20:34.947219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.884 [2024-07-13 22:20:34.947255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.884 qpair failed and we were unable to recover it. 
00:37:15.884 [2024-07-13 22:20:34.947471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.884 [2024-07-13 22:20:34.947508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.884 qpair failed and we were unable to recover it. 00:37:15.884 [2024-07-13 22:20:34.947710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.884 [2024-07-13 22:20:34.947746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.884 qpair failed and we were unable to recover it. 00:37:15.884 [2024-07-13 22:20:34.947956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.884 [2024-07-13 22:20:34.947990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.884 qpair failed and we were unable to recover it. 00:37:15.884 [2024-07-13 22:20:34.948191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.884 [2024-07-13 22:20:34.948227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.884 qpair failed and we were unable to recover it. 00:37:15.884 [2024-07-13 22:20:34.948510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.884 [2024-07-13 22:20:34.948567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.884 qpair failed and we were unable to recover it. 00:37:15.884 [2024-07-13 22:20:34.948765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.884 [2024-07-13 22:20:34.948801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.884 qpair failed and we were unable to recover it. 00:37:15.884 [2024-07-13 22:20:34.949064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.884 [2024-07-13 22:20:34.949097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.884 qpair failed and we were unable to recover it. 00:37:15.884 [2024-07-13 22:20:34.949289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.884 [2024-07-13 22:20:34.949325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.884 qpair failed and we were unable to recover it. 00:37:15.884 [2024-07-13 22:20:34.949537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.884 [2024-07-13 22:20:34.949571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.884 qpair failed and we were unable to recover it. 00:37:15.884 [2024-07-13 22:20:34.949760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.884 [2024-07-13 22:20:34.949794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.884 qpair failed and we were unable to recover it. 
00:37:15.884 [2024-07-13 22:20:34.950001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.884 [2024-07-13 22:20:34.950034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.884 qpair failed and we were unable to recover it. 00:37:15.884 [2024-07-13 22:20:34.950199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.950232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.950489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.950544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.950733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.950766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.950983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.951017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.951204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.951242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.951511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.951547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.951752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.951788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.951987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.952021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.952228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.952271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 
00:37:15.885 [2024-07-13 22:20:34.952661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.952739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.952979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.953012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.953224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.953261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.953482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.953537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.953778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.953814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.954045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.954078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.954321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.954359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.954613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.954669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.954903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.954937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.955126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.955186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 
00:37:15.885 [2024-07-13 22:20:34.955379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.955415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.955740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.955804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.956015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.956048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.956272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.956305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.956495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.956528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.956711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.956744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.956953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.956992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.957149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.957183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.957423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.957459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.957690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.957727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 
00:37:15.885 [2024-07-13 22:20:34.957937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.957971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.958176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.958212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.958428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.958462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.958652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.958685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.958899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.958933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.959120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.959153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.959341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.959373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.959558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.959598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.959792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.959825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.960037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.960071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 
00:37:15.885 [2024-07-13 22:20:34.960289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.885 [2024-07-13 22:20:34.960322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.885 qpair failed and we were unable to recover it. 00:37:15.885 [2024-07-13 22:20:34.960509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.886 [2024-07-13 22:20:34.960543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.886 qpair failed and we were unable to recover it. 00:37:15.886 [2024-07-13 22:20:34.960708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.886 [2024-07-13 22:20:34.960741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.886 qpair failed and we were unable to recover it. 00:37:15.886 [2024-07-13 22:20:34.960925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.886 [2024-07-13 22:20:34.960959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.886 qpair failed and we were unable to recover it. 00:37:15.886 [2024-07-13 22:20:34.961146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.886 [2024-07-13 22:20:34.961179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.886 qpair failed and we were unable to recover it. 00:37:15.886 [2024-07-13 22:20:34.961398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.886 [2024-07-13 22:20:34.961430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.886 qpair failed and we were unable to recover it. 00:37:15.886 [2024-07-13 22:20:34.961637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.886 [2024-07-13 22:20:34.961670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.886 qpair failed and we were unable to recover it. 00:37:15.886 [2024-07-13 22:20:34.961854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.886 [2024-07-13 22:20:34.961895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.886 qpair failed and we were unable to recover it. 00:37:15.886 [2024-07-13 22:20:34.962083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.886 [2024-07-13 22:20:34.962115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.886 qpair failed and we were unable to recover it. 00:37:15.886 [2024-07-13 22:20:34.962301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.886 [2024-07-13 22:20:34.962333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.886 qpair failed and we were unable to recover it. 
00:37:15.886 [2024-07-13 22:20:34.962528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.886 [2024-07-13 22:20:34.962560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.886 qpair failed and we were unable to recover it. 00:37:15.886 [2024-07-13 22:20:34.962774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.886 [2024-07-13 22:20:34.962806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.886 qpair failed and we were unable to recover it. 00:37:15.886 [2024-07-13 22:20:34.962989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.886 [2024-07-13 22:20:34.963022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.886 qpair failed and we were unable to recover it. 00:37:15.886 [2024-07-13 22:20:34.963242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.886 [2024-07-13 22:20:34.963275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.886 qpair failed and we were unable to recover it. 00:37:15.886 [2024-07-13 22:20:34.963469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.886 [2024-07-13 22:20:34.963502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.886 qpair failed and we were unable to recover it. 00:37:15.886 [2024-07-13 22:20:34.963691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.886 [2024-07-13 22:20:34.963724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.886 qpair failed and we were unable to recover it. 00:37:15.886 [2024-07-13 22:20:34.963925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.886 [2024-07-13 22:20:34.963959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.886 qpair failed and we were unable to recover it. 00:37:15.886 [2024-07-13 22:20:34.964116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.886 [2024-07-13 22:20:34.964149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.886 qpair failed and we were unable to recover it. 00:37:15.886 [2024-07-13 22:20:34.964314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.886 [2024-07-13 22:20:34.964347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.886 qpair failed and we were unable to recover it. 00:37:15.886 [2024-07-13 22:20:34.964568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.886 [2024-07-13 22:20:34.964601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.886 qpair failed and we were unable to recover it. 
00:37:15.886 [2024-07-13 22:20:34.964826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.886 [2024-07-13 22:20:34.964859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.886 qpair failed and we were unable to recover it.
00:37:15.886 [... the same three-line connect()/qpair-failure sequence repeats verbatim (same tqpair=0x6150001ffe80, addr=10.0.0.2, port=4420, errno = 111), with only the capture timestamps advancing from 2024-07-13 22:20:34.964826 through 22:20:35.011447 (log clock 00:37:15.886-00:37:15.891) ...]
00:37:15.891 [2024-07-13 22:20:35.011642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.891 [2024-07-13 22:20:35.011675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.891 qpair failed and we were unable to recover it. 00:37:15.891 [2024-07-13 22:20:35.011893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.011935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.012151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.012184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.012367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.012401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.012628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.012661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.012849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.012898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.013113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.013145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.013333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.013366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.013556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.013589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.013774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.013806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 
00:37:15.892 [2024-07-13 22:20:35.014024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.014063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.014255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.014291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.014448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.014481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.014643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.014675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.014863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.014920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.015134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.015176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.015338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.015371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.015563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.015596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.015783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.015815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.016020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.016053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 
00:37:15.892 [2024-07-13 22:20:35.016247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.016280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.016470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.016502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.016655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.016688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.016879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.016920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.017117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.017150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.017344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.017377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.017566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.017599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.017824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.017856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.018065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.018098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.018290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.018322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 
00:37:15.892 [2024-07-13 22:20:35.018511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.018544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.018737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.018770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.019001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.019038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.019260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.019293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.019506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.019539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.019706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.019739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.019923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.019960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.020157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.020189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.020376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.020409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.020595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.020627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 
00:37:15.892 [2024-07-13 22:20:35.020823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.020855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.892 [2024-07-13 22:20:35.021084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.892 [2024-07-13 22:20:35.021117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.892 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.021305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.021339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.021548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.021581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.021804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.021837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.022045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.022078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.022248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.022281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.022448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.022480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.022650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.022682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.022846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.022888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 
00:37:15.893 [2024-07-13 22:20:35.023062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.023095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.023279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.023312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.023501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.023533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.023726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.023758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.023925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.023958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.024179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.024212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.024413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.024446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.024602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.024636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.024823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.024856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.025061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.025095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 
00:37:15.893 [2024-07-13 22:20:35.025313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.025346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.025537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.025569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.025751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.025783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.025983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.026017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.026213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.026247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.026438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.026471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.026658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.026690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.026917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.026949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.027124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.027156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.027342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.027385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 
00:37:15.893 [2024-07-13 22:20:35.027598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.027631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.027823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.027855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.028104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.028137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.028338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.028371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.028596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.028628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.893 qpair failed and we were unable to recover it. 00:37:15.893 [2024-07-13 22:20:35.028849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.893 [2024-07-13 22:20:35.028890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.029067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.029106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.029299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.029332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.029545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.029577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.029738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.029771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 
00:37:15.894 [2024-07-13 22:20:35.029970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.030004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.030169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.030201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.030413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.030446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.030645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.030678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.030877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.030916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.031130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.031163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.031350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.031383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.031598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.031630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.031847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.031886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.032083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.032117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 
00:37:15.894 [2024-07-13 22:20:35.032335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.032367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.032556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.032589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.032784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.032817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.033026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.033059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.033226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.033258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.033469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.033502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.033697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.033731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.033898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.033938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.034102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.034134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.034322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.034354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 
00:37:15.894 [2024-07-13 22:20:35.034546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.034579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.034744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.034778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.034960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.034994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.035208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.035241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.035434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.035468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.035632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.035666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.035829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.035862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.036082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.036117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.036314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.036348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.036533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.036566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 
00:37:15.894 [2024-07-13 22:20:35.036780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.036813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.037023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.037056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.037275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.037309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.037495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.037528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.037687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.037719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.894 [2024-07-13 22:20:35.037939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.894 [2024-07-13 22:20:35.037973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.894 qpair failed and we were unable to recover it. 00:37:15.895 [2024-07-13 22:20:35.038168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.895 [2024-07-13 22:20:35.038205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.895 qpair failed and we were unable to recover it. 00:37:15.895 [2024-07-13 22:20:35.038368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.895 [2024-07-13 22:20:35.038400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.895 qpair failed and we were unable to recover it. 00:37:15.895 [2024-07-13 22:20:35.038593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.895 [2024-07-13 22:20:35.038626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.895 qpair failed and we were unable to recover it. 00:37:15.895 [2024-07-13 22:20:35.038813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.895 [2024-07-13 22:20:35.038846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.895 qpair failed and we were unable to recover it. 
00:37:15.895 [2024-07-13 22:20:35.039078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.895 [2024-07-13 22:20:35.039111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.895 qpair failed and we were unable to recover it. 00:37:15.895 [2024-07-13 22:20:35.039322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.895 [2024-07-13 22:20:35.039355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.895 qpair failed and we were unable to recover it. 00:37:15.895 [2024-07-13 22:20:35.039544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.895 [2024-07-13 22:20:35.039576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.895 qpair failed and we were unable to recover it. 00:37:15.895 [2024-07-13 22:20:35.039738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.895 [2024-07-13 22:20:35.039770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.895 qpair failed and we were unable to recover it. 00:37:15.895 [2024-07-13 22:20:35.039934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.895 [2024-07-13 22:20:35.039967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.895 qpair failed and we were unable to recover it. 00:37:15.895 [2024-07-13 22:20:35.040186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.895 [2024-07-13 22:20:35.040219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.895 qpair failed and we were unable to recover it. 00:37:15.895 [2024-07-13 22:20:35.040402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.895 [2024-07-13 22:20:35.040434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.895 qpair failed and we were unable to recover it. 00:37:15.895 [2024-07-13 22:20:35.040617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.895 [2024-07-13 22:20:35.040650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.895 qpair failed and we were unable to recover it. 00:37:15.895 [2024-07-13 22:20:35.040827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.895 [2024-07-13 22:20:35.040860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.895 qpair failed and we were unable to recover it. 00:37:15.895 [2024-07-13 22:20:35.041095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.895 [2024-07-13 22:20:35.041128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.895 qpair failed and we were unable to recover it. 
00:37:15.895 [2024-07-13 22:20:35.041323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.895 [2024-07-13 22:20:35.041355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.895 qpair failed and we were unable to recover it.
00:37:15.895 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats verbatim, with only the timestamps advancing, for the remainder of this burst of reconnect attempts through 22:20:35.088 ...]
00:37:15.900 [2024-07-13 22:20:35.088432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.900 [2024-07-13 22:20:35.088466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.900 qpair failed and we were unable to recover it. 00:37:15.900 [2024-07-13 22:20:35.088658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.900 [2024-07-13 22:20:35.088691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.900 qpair failed and we were unable to recover it. 00:37:15.900 [2024-07-13 22:20:35.088879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.900 [2024-07-13 22:20:35.088913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.900 qpair failed and we were unable to recover it. 00:37:15.900 [2024-07-13 22:20:35.089076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.900 [2024-07-13 22:20:35.089114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.900 qpair failed and we were unable to recover it. 00:37:15.900 [2024-07-13 22:20:35.089271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.900 [2024-07-13 22:20:35.089303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.900 qpair failed and we were unable to recover it. 00:37:15.900 [2024-07-13 22:20:35.089470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.900 [2024-07-13 22:20:35.089503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.900 qpair failed and we were unable to recover it. 00:37:15.900 [2024-07-13 22:20:35.089688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.900 [2024-07-13 22:20:35.089721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.900 qpair failed and we were unable to recover it. 00:37:15.900 [2024-07-13 22:20:35.089936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.900 [2024-07-13 22:20:35.089969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.900 qpair failed and we were unable to recover it. 00:37:15.900 [2024-07-13 22:20:35.090127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.900 [2024-07-13 22:20:35.090161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.900 qpair failed and we were unable to recover it. 00:37:15.900 [2024-07-13 22:20:35.090347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.900 [2024-07-13 22:20:35.090380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.900 qpair failed and we were unable to recover it. 
00:37:15.900 [2024-07-13 22:20:35.090570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.900 [2024-07-13 22:20:35.090604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.900 qpair failed and we were unable to recover it. 00:37:15.900 [2024-07-13 22:20:35.090792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.900 [2024-07-13 22:20:35.090825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.900 qpair failed and we were unable to recover it. 00:37:15.900 [2024-07-13 22:20:35.091045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.900 [2024-07-13 22:20:35.091078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.091258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.091290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.091473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.091506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.091686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.091719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.091876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.091909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.092163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.092196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.092415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.092448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.092635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.092668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 
00:37:15.901 [2024-07-13 22:20:35.092857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.092906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.093071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.093103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.093314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.093346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.093509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.093541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.093707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.093739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.093932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.093966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.094133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.094165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.094336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.094368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.094557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.094588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.094774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.094807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 
00:37:15.901 [2024-07-13 22:20:35.094975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.095009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.095170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.095203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.095359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.095391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.095600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.095638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.095794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.095826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.096073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.096106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.096275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.096308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.096465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.096497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.096696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.096728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.096890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.096923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 
00:37:15.901 [2024-07-13 22:20:35.097112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.097145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.097322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.097356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.097512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.097544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.097756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.097793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.097959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.097993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.098156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.098189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.098354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.098388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.098570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.098602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.098824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.098857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.099030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.099074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 
00:37:15.901 [2024-07-13 22:20:35.099236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.099269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.901 [2024-07-13 22:20:35.099425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.901 [2024-07-13 22:20:35.099458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.901 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.099625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.099659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.099835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.099874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.100039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.100071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.100231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.100264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.100455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.100488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.100680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.100712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.100900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.100933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.101089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.101121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 
00:37:15.902 [2024-07-13 22:20:35.101272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.101304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.101459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.101493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.101673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.101705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.101889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.101923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.102134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.102166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.102325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.102357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.102509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.102542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.102756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.102789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.102951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.102984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.103173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.103206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 
00:37:15.902 [2024-07-13 22:20:35.103369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.103401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.103561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.103594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.103815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.103847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.104040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.104073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.104234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.104266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.104426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.104457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.104654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.104686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.104853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.104893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.105047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.105080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.105237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.105269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 
00:37:15.902 [2024-07-13 22:20:35.105455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.105487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.105642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.105674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.105859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.105900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.106110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.106147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.106317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.106350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.106534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.106567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.106725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.902 [2024-07-13 22:20:35.106756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.902 qpair failed and we were unable to recover it. 00:37:15.902 [2024-07-13 22:20:35.106945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.106977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.107256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.107289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.107479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.107511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 
00:37:15.903 [2024-07-13 22:20:35.107687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.107720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.107906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.107937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.108147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.108179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.108367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.108400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.108558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.108592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.108773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.108805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.108992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.109026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.109224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.109257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.109439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.109471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.109660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.109693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 
00:37:15.903 [2024-07-13 22:20:35.109881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.109914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.110152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.110185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.110372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.110405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.110566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.110598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.110755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.110787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.110942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.110975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.111161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.111194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.111349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.111382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.111568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.111601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.111764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.111796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 
00:37:15.903 [2024-07-13 22:20:35.111985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.112018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.112200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.112232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.112425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.112459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.112639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.112682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.112892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.112926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.113115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.113148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.113307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.113339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.113496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.113528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.113718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.113752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.113946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.113980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 
00:37:15.903 [2024-07-13 22:20:35.114163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.114195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.114357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.114389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.114575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.114607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.114759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.114796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.114959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.114992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.115207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.115240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.115426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.115459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.115619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.903 [2024-07-13 22:20:35.115651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.903 qpair failed and we were unable to recover it. 00:37:15.903 [2024-07-13 22:20:35.115872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.904 [2024-07-13 22:20:35.115905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.904 qpair failed and we were unable to recover it. 00:37:15.904 [2024-07-13 22:20:35.116066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.904 [2024-07-13 22:20:35.116099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.904 qpair failed and we were unable to recover it. 
00:37:15.904 [2024-07-13 22:20:35.116282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.904 [2024-07-13 22:20:35.116314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.904 qpair failed and we were unable to recover it.
00:37:15.904 [... the same three-line error sequence (posix_sock_create connect() failure with errno = 111, nvme_tcp_qpair_connect_sock connection error, "qpair failed and we were unable to recover it.") repeats for each subsequent reconnect attempt, from 22:20:35.116467 through 22:20:35.162932 (elapsed 00:37:15.904 to 00:37:15.909), every time for tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 ...]
00:37:15.909 [2024-07-13 22:20:35.163099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.163132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 00:37:15.909 [2024-07-13 22:20:35.163335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.163371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 00:37:15.909 [2024-07-13 22:20:35.163600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.163637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 00:37:15.909 [2024-07-13 22:20:35.163828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.163861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 00:37:15.909 [2024-07-13 22:20:35.164054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.164091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 00:37:15.909 [2024-07-13 22:20:35.164308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.164341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 00:37:15.909 [2024-07-13 22:20:35.164546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.164583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 00:37:15.909 [2024-07-13 22:20:35.164805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.164838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 00:37:15.909 [2024-07-13 22:20:35.165058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.165096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 00:37:15.909 [2024-07-13 22:20:35.165307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.165344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 
00:37:15.909 [2024-07-13 22:20:35.165553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.165587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 00:37:15.909 [2024-07-13 22:20:35.165800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.165833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 00:37:15.909 [2024-07-13 22:20:35.166079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.166112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 00:37:15.909 [2024-07-13 22:20:35.166323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.166373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 00:37:15.909 [2024-07-13 22:20:35.166586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.166623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 00:37:15.909 [2024-07-13 22:20:35.166860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.166902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 00:37:15.909 [2024-07-13 22:20:35.167123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.167156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 00:37:15.909 [2024-07-13 22:20:35.167316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.167349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 00:37:15.909 [2024-07-13 22:20:35.167506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.167539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 00:37:15.909 [2024-07-13 22:20:35.167727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.167759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 
00:37:15.909 [2024-07-13 22:20:35.167972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.168010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 00:37:15.909 [2024-07-13 22:20:35.168187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.168223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 00:37:15.909 [2024-07-13 22:20:35.168428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.168464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 00:37:15.909 [2024-07-13 22:20:35.168643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.168677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 00:37:15.909 [2024-07-13 22:20:35.168919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.168956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 00:37:15.909 [2024-07-13 22:20:35.169168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.909 [2024-07-13 22:20:35.169204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.909 qpair failed and we were unable to recover it. 00:37:15.909 [2024-07-13 22:20:35.169401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.169438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.169645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.169678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.169902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.169936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.170129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.170189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 
00:37:15.910 [2024-07-13 22:20:35.170421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.170458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.170652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.170686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.170875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.170908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.171070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.171103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.171288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.171322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.171545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.171578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.171785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.171821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.172018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.172051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.172242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.172274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.172498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.172531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 
00:37:15.910 [2024-07-13 22:20:35.172722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.172759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.172961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.172998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.173200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.173236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.173424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.173456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.173617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.173649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.173879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.173916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.174119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.174156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.174346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.174379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.174587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.174624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.174851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.174898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 
00:37:15.910 [2024-07-13 22:20:35.175139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.175182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.175411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.175444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.175642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.175675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.175884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.175920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.176162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.176199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.176386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.176420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.176612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.176660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.176877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.176923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.177155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.177193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.177382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.177415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 
00:37:15.910 [2024-07-13 22:20:35.177576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.910 [2024-07-13 22:20:35.177610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.910 qpair failed and we were unable to recover it. 00:37:15.910 [2024-07-13 22:20:35.177835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.177876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.178095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.178147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.178332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.178364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.178703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.178762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.178972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.179010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.179220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.179257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.179472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.179504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.179840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.179908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.180139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.180176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 
00:37:15.911 [2024-07-13 22:20:35.180413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.180449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.180666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.180699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.180892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.180929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.181098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.181134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.181316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.181352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.181585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.181617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.181807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.181845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.182023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.182055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.182247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.182279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.182526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.182558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 
00:37:15.911 [2024-07-13 22:20:35.182777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.182812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.183014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.183048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.183217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.183250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.183458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.183491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.183677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.183712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.183916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.183953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.184185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.184222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.184459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.184491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.184705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.184742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.184950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.184987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 
00:37:15.911 [2024-07-13 22:20:35.185182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.185223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.185441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.185474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.185689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.185727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.185924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.185972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.186179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.186215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.186436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.186467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.186714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.186746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.186930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.186963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.187121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.187153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.187364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.187397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 
00:37:15.911 [2024-07-13 22:20:35.187674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.187732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.187946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.911 [2024-07-13 22:20:35.187980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.911 qpair failed and we were unable to recover it. 00:37:15.911 [2024-07-13 22:20:35.188217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.188254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 00:37:15.912 [2024-07-13 22:20:35.188491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.188524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 00:37:15.912 [2024-07-13 22:20:35.188711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.188748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 00:37:15.912 [2024-07-13 22:20:35.188987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.189020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 00:37:15.912 [2024-07-13 22:20:35.189178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.189211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 00:37:15.912 [2024-07-13 22:20:35.189403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.189436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 00:37:15.912 [2024-07-13 22:20:35.189648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.189685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 00:37:15.912 [2024-07-13 22:20:35.189909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.189944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 
00:37:15.912 [2024-07-13 22:20:35.190106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.190139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 00:37:15.912 [2024-07-13 22:20:35.190327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.190361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 00:37:15.912 [2024-07-13 22:20:35.190577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.190612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 00:37:15.912 [2024-07-13 22:20:35.190839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.190884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 00:37:15.912 [2024-07-13 22:20:35.191050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.191082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 00:37:15.912 [2024-07-13 22:20:35.191296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.191329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 00:37:15.912 [2024-07-13 22:20:35.191620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.191678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 00:37:15.912 [2024-07-13 22:20:35.191913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.191950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 00:37:15.912 [2024-07-13 22:20:35.192131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.192167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 00:37:15.912 [2024-07-13 22:20:35.192354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.192387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 
00:37:15.912 [2024-07-13 22:20:35.192666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.192723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 00:37:15.912 [2024-07-13 22:20:35.192942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.192975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 00:37:15.912 [2024-07-13 22:20:35.193187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.193223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 00:37:15.912 [2024-07-13 22:20:35.193457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.193490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 00:37:15.912 [2024-07-13 22:20:35.193676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.193713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 00:37:15.912 [2024-07-13 22:20:35.193906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.193943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 00:37:15.912 [2024-07-13 22:20:35.194152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.194186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 00:37:15.912 [2024-07-13 22:20:35.194371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.194405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 00:37:15.912 [2024-07-13 22:20:35.194601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.194635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 00:37:15.912 [2024-07-13 22:20:35.194840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.912 [2024-07-13 22:20:35.194885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.912 qpair failed and we were unable to recover it. 
00:37:15.912 [2024-07-13 22:20:35.195089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.912 [2024-07-13 22:20:35.195129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.912 qpair failed and we were unable to recover it.
[... the preceding three-line error sequence repeats 208 more times between 22:20:35.195365 and 22:20:35.247545, identical except for timestamps ...]
00:37:15.917 [2024-07-13 22:20:35.247738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:15.917 [2024-07-13 22:20:35.247771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:15.917 qpair failed and we were unable to recover it.
00:37:15.917 [2024-07-13 22:20:35.248100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.917 [2024-07-13 22:20:35.248162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.248367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.248405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.248610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.248642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.248828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.248859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.249101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.249139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.249372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.249408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.249614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.249650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.249889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.249929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.250328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.250393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.250632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.250680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 
00:37:15.918 [2024-07-13 22:20:35.250894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.250931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.251120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.251153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.251315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.251349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.251586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.251623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.251835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.251880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.252085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.252118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.252306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.252338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.252531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.252563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.252813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.252849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.253077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.253110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 
00:37:15.918 [2024-07-13 22:20:35.253323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.253360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.253566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.253603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.253816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.253853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.254084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.254117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.254430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.254493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.254747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.254779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.254952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.254986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.255216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.255249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.255560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.255627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.255843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.255888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 
00:37:15.918 [2024-07-13 22:20:35.256104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.256156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.256372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.256405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.256621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.256672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.256848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.256898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.257123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.257156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.257311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.257343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.257513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.257545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.257777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.257809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.258007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.258055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.258296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.258329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 
00:37:15.918 [2024-07-13 22:20:35.258690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.918 [2024-07-13 22:20:35.258767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.918 qpair failed and we were unable to recover it. 00:37:15.918 [2024-07-13 22:20:35.259007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.919 [2024-07-13 22:20:35.259043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.919 qpair failed and we were unable to recover it. 00:37:15.919 [2024-07-13 22:20:35.259213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:15.919 [2024-07-13 22:20:35.259250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:15.919 qpair failed and we were unable to recover it. 00:37:16.201 [2024-07-13 22:20:35.259465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.259498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 00:37:16.201 [2024-07-13 22:20:35.259708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.259743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 00:37:16.201 [2024-07-13 22:20:35.259972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.260007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 00:37:16.201 [2024-07-13 22:20:35.260190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.260226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 00:37:16.201 [2024-07-13 22:20:35.260465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.260498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 00:37:16.201 [2024-07-13 22:20:35.260692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.260730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 00:37:16.201 [2024-07-13 22:20:35.260962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.260999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 
00:37:16.201 [2024-07-13 22:20:35.261198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.261232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 00:37:16.201 [2024-07-13 22:20:35.261424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.261457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 00:37:16.201 [2024-07-13 22:20:35.261649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.261681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 00:37:16.201 [2024-07-13 22:20:35.261842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.261882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 00:37:16.201 [2024-07-13 22:20:35.262084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.262116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 00:37:16.201 [2024-07-13 22:20:35.262300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.262332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 00:37:16.201 [2024-07-13 22:20:35.262491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.262524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 00:37:16.201 [2024-07-13 22:20:35.262684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.262716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 00:37:16.201 [2024-07-13 22:20:35.262886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.262936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 00:37:16.201 [2024-07-13 22:20:35.263142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.263174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 
00:37:16.201 [2024-07-13 22:20:35.263379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.263411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 00:37:16.201 [2024-07-13 22:20:35.263599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.263632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 00:37:16.201 [2024-07-13 22:20:35.263783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.263816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 00:37:16.201 [2024-07-13 22:20:35.263995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.264028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 00:37:16.201 [2024-07-13 22:20:35.264200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.264232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 00:37:16.201 [2024-07-13 22:20:35.264416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.264449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 00:37:16.201 [2024-07-13 22:20:35.264604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.264637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 00:37:16.201 [2024-07-13 22:20:35.264795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.264828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 00:37:16.201 [2024-07-13 22:20:35.265023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.265056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 00:37:16.201 [2024-07-13 22:20:35.265212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.201 [2024-07-13 22:20:35.265245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.201 qpair failed and we were unable to recover it. 
00:37:16.202 [2024-07-13 22:20:35.265406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.265442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.265610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.265643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.265884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.265918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.266103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.266165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.266369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.266406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.266592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.266626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.266842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.266884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.267041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.267075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.267243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.267275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.267493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.267525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 
00:37:16.202 [2024-07-13 22:20:35.267684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.267715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.267927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.267961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.268146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.268179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.268355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.268388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.268554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.268588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.268749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.268781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.268964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.268998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.269187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.269220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.269405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.269441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.269628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.269666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 
00:37:16.202 [2024-07-13 22:20:35.269901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.269938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.270136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.270169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.270335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.270368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.270552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.270585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.270797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.270831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.271006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.271039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.271220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.271254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.271446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.271479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.271695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.271728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.271887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.271920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 
00:37:16.202 [2024-07-13 22:20:35.272088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.272120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.272287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.272320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.272508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.272541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.272725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.272758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.272970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.273004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.273225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.273259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.273496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.273532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.273715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.273767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.202 [2024-07-13 22:20:35.273982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.202 [2024-07-13 22:20:35.274016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.202 qpair failed and we were unable to recover it. 00:37:16.203 [2024-07-13 22:20:35.274172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.203 [2024-07-13 22:20:35.274205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.203 qpair failed and we were unable to recover it. 
00:37:16.203 [2024-07-13 22:20:35.274390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.203 [2024-07-13 22:20:35.274427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.203 qpair failed and we were unable to recover it. 00:37:16.203 [2024-07-13 22:20:35.274611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.203 [2024-07-13 22:20:35.274644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.203 qpair failed and we were unable to recover it. 00:37:16.203 [2024-07-13 22:20:35.274807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.203 [2024-07-13 22:20:35.274840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.203 qpair failed and we were unable to recover it. 00:37:16.203 [2024-07-13 22:20:35.275047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.203 [2024-07-13 22:20:35.275080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.203 qpair failed and we were unable to recover it. 00:37:16.203 [2024-07-13 22:20:35.275236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.203 [2024-07-13 22:20:35.275269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.203 qpair failed and we were unable to recover it. 00:37:16.203 [2024-07-13 22:20:35.275452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.203 [2024-07-13 22:20:35.275484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.203 qpair failed and we were unable to recover it. 00:37:16.203 [2024-07-13 22:20:35.275650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.203 [2024-07-13 22:20:35.275683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.203 qpair failed and we were unable to recover it. 00:37:16.203 [2024-07-13 22:20:35.275871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.203 [2024-07-13 22:20:35.275905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.203 qpair failed and we were unable to recover it. 00:37:16.203 [2024-07-13 22:20:35.276069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.203 [2024-07-13 22:20:35.276102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.203 qpair failed and we were unable to recover it. 00:37:16.203 [2024-07-13 22:20:35.276292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.203 [2024-07-13 22:20:35.276325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.203 qpair failed and we were unable to recover it. 
00:37:16.203 [2024-07-13 22:20:35.276515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.203 [2024-07-13 22:20:35.276548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.203 qpair failed and we were unable to recover it. 00:37:16.203 [2024-07-13 22:20:35.276704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.203 [2024-07-13 22:20:35.276737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.203 qpair failed and we were unable to recover it. 00:37:16.203 [2024-07-13 22:20:35.276901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.203 [2024-07-13 22:20:35.276934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.203 qpair failed and we were unable to recover it. 00:37:16.203 [2024-07-13 22:20:35.277147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.203 [2024-07-13 22:20:35.277180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.203 qpair failed and we were unable to recover it. 00:37:16.203 [2024-07-13 22:20:35.277379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.203 [2024-07-13 22:20:35.277413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.203 qpair failed and we were unable to recover it. 00:37:16.203 [2024-07-13 22:20:35.277620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.203 [2024-07-13 22:20:35.277657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.203 qpair failed and we were unable to recover it. 00:37:16.203 [2024-07-13 22:20:35.277880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.203 [2024-07-13 22:20:35.277914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.203 qpair failed and we were unable to recover it. 00:37:16.203 [2024-07-13 22:20:35.278095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.203 [2024-07-13 22:20:35.278128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.203 qpair failed and we were unable to recover it. 00:37:16.203 [2024-07-13 22:20:35.278323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.203 [2024-07-13 22:20:35.278357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.203 qpair failed and we were unable to recover it. 00:37:16.203 [2024-07-13 22:20:35.278517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.203 [2024-07-13 22:20:35.278549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.203 qpair failed and we were unable to recover it. 
00:37:16.203 [2024-07-13 22:20:35.278714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.203 [2024-07-13 22:20:35.278750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.203 qpair failed and we were unable to recover it.
00:37:16.203 [... the identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x6150001ffe80 on every retry from 22:20:35.278 through 22:20:35.294 ...]
00:37:16.205 [2024-07-13 22:20:35.295071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.205 [2024-07-13 22:20:35.295135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:16.205 qpair failed and we were unable to recover it.
00:37:16.205 [... the same failure sequence repeats for tqpair=0x61500021ff00 from 22:20:35.295 through 22:20:35.300 ...]
00:37:16.206 [... from 22:20:35.300572 the failures resume against tqpair=0x6150001ffe80 and repeat through 22:20:35.330; every attempt targets addr=10.0.0.2, port=4420 and ends with "qpair failed and we were unable to recover it." ...]
00:37:16.208 [2024-07-13 22:20:35.331117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.208 [2024-07-13 22:20:35.331149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.208 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.331378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.331414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.331625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.331657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.331844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.331886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.332099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.332133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.332312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.332349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.332586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.332619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.332781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.332814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.333024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.333062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.333271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.333320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 
00:37:16.209 [2024-07-13 22:20:35.333521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.333558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.333781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.333813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.334029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.334063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.334278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.334316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.334506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.334540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.334706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.334738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.334920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.334953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.335155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.335189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.335353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.335386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.335575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.335608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 
00:37:16.209 [2024-07-13 22:20:35.335813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.335845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.336063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.336099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.336306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.336342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.336531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.336565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.336800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.336836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.337084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.337116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.337321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.337354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.337562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.337595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.337807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.337843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.338073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.338110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 
00:37:16.209 [2024-07-13 22:20:35.338292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.338329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.338522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.338554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.338761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.338798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.339029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.339063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.339291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.339327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.339513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.339545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.339729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.339765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.339974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.340024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.340233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.340269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 00:37:16.209 [2024-07-13 22:20:35.340486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.340518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.209 qpair failed and we were unable to recover it. 
00:37:16.209 [2024-07-13 22:20:35.340711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.209 [2024-07-13 22:20:35.340744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.340955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.340991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.341189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.341225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.341432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.341465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.341689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.341725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.341969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.342015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.342229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.342265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.342468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.342501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.342704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.342740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.342967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.343013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 
00:37:16.210 [2024-07-13 22:20:35.343207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.343241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.343444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.343477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.343671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.343707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.343882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.343926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.344127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.344170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.344354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.344387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.344583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.344615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.344811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.344845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.345048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.345081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.345271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.345303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 
00:37:16.210 [2024-07-13 22:20:35.345463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.345497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.345707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.345739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.345926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.345963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.346222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.346258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.346458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.346492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.346681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.346714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.346876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.346909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.347104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.347138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.347296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.347329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.347517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.347550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 
00:37:16.210 [2024-07-13 22:20:35.347704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.347736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.347953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.347987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.348163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.348196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.348409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.348456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.348703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.210 [2024-07-13 22:20:35.348736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.210 qpair failed and we were unable to recover it. 00:37:16.210 [2024-07-13 22:20:35.348924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.348957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.349145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.349178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.349363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.349399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.349610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.349646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.349823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.349856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 
00:37:16.211 [2024-07-13 22:20:35.350067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.350099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.350318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.350351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.350509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.350543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.350730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.350763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.350933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.350966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.351154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.351186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.351352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.351384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.351603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.351635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.351795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.351827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.352017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.352050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 
00:37:16.211 [2024-07-13 22:20:35.352259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.352291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.352446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.352479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.352648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.352684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.352893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.352939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.353125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.353158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.353351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.353383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.353572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.353608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.353839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.353882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.354075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.354108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.354296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.354329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 
00:37:16.211 [2024-07-13 22:20:35.354512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.354545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.354707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.354739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.354908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.354942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.355122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.355155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.355353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.355386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.355546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.355579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.355783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.355819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.356044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.356077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.356236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.356268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.356453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.356486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 
00:37:16.211 [2024-07-13 22:20:35.356671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.356704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.356894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.356929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.357109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.357146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.357355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.357392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.357583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.211 [2024-07-13 22:20:35.357620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.211 qpair failed and we were unable to recover it. 00:37:16.211 [2024-07-13 22:20:35.357828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.212 [2024-07-13 22:20:35.357860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.212 qpair failed and we were unable to recover it. 00:37:16.212 [2024-07-13 22:20:35.358037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.212 [2024-07-13 22:20:35.358070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.212 qpair failed and we were unable to recover it. 00:37:16.212 [2024-07-13 22:20:35.358318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.212 [2024-07-13 22:20:35.358358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.212 qpair failed and we were unable to recover it. 00:37:16.212 [2024-07-13 22:20:35.358542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.212 [2024-07-13 22:20:35.358580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.212 qpair failed and we were unable to recover it. 00:37:16.212 [2024-07-13 22:20:35.358797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.212 [2024-07-13 22:20:35.358830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.212 qpair failed and we were unable to recover it. 
00:37:16.212 [2024-07-13 22:20:35.359058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.212 [2024-07-13 22:20:35.359095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.212 qpair failed and we were unable to recover it. 00:37:16.212 [2024-07-13 22:20:35.359274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.212 [2024-07-13 22:20:35.359310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.212 qpair failed and we were unable to recover it. 00:37:16.212 [2024-07-13 22:20:35.359514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.212 [2024-07-13 22:20:35.359550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.212 qpair failed and we were unable to recover it. 00:37:16.212 [2024-07-13 22:20:35.359752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.212 [2024-07-13 22:20:35.359785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.212 qpair failed and we were unable to recover it. 00:37:16.212 [2024-07-13 22:20:35.359936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.212 [2024-07-13 22:20:35.359970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.212 qpair failed and we were unable to recover it. 00:37:16.212 [2024-07-13 22:20:35.360168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.212 [2024-07-13 22:20:35.360201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.212 qpair failed and we were unable to recover it. 00:37:16.212 [2024-07-13 22:20:35.360395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.212 [2024-07-13 22:20:35.360427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.212 qpair failed and we were unable to recover it. 00:37:16.212 [2024-07-13 22:20:35.360609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.212 [2024-07-13 22:20:35.360641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.212 qpair failed and we were unable to recover it. 00:37:16.212 [2024-07-13 22:20:35.360860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.212 [2024-07-13 22:20:35.360914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.212 qpair failed and we were unable to recover it. 00:37:16.212 [2024-07-13 22:20:35.361105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.212 [2024-07-13 22:20:35.361137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.212 qpair failed and we were unable to recover it. 
00:37:16.212 [2024-07-13 22:20:35.361352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.212 [2024-07-13 22:20:35.361404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.212 qpair failed and we were unable to recover it.
00:37:16.217 [... the same three-line failure (connect() errno = 111; sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim for every subsequent reconnect attempt, from 22:20:35.361613 through 22:20:35.413496 ...]
00:37:16.217 [2024-07-13 22:20:35.413702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.217 [2024-07-13 22:20:35.413738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.217 qpair failed and we were unable to recover it. 00:37:16.217 [2024-07-13 22:20:35.413950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.217 [2024-07-13 22:20:35.413983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.217 qpair failed and we were unable to recover it. 00:37:16.217 [2024-07-13 22:20:35.414176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.217 [2024-07-13 22:20:35.414225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.217 qpair failed and we were unable to recover it. 00:37:16.217 [2024-07-13 22:20:35.414459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.217 [2024-07-13 22:20:35.414491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.217 qpair failed and we were unable to recover it. 00:37:16.217 [2024-07-13 22:20:35.414705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.217 [2024-07-13 22:20:35.414741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.217 qpair failed and we were unable to recover it. 00:37:16.217 [2024-07-13 22:20:35.414965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.217 [2024-07-13 22:20:35.415002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.217 qpair failed and we were unable to recover it. 00:37:16.217 [2024-07-13 22:20:35.415243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.217 [2024-07-13 22:20:35.415276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.217 qpair failed and we were unable to recover it. 00:37:16.217 [2024-07-13 22:20:35.415486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.217 [2024-07-13 22:20:35.415519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.217 qpair failed and we were unable to recover it. 00:37:16.217 [2024-07-13 22:20:35.415709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.217 [2024-07-13 22:20:35.415743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.217 qpair failed and we were unable to recover it. 00:37:16.217 [2024-07-13 22:20:35.415918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.217 [2024-07-13 22:20:35.415952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.217 qpair failed and we were unable to recover it. 
00:37:16.217 [2024-07-13 22:20:35.416201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.217 [2024-07-13 22:20:35.416237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.416443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.416476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.416684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.416720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.416931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.416964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.417156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.417189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.417386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.417420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.417608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.417641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.417808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.417845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.418095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.418132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.418329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.418363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 
00:37:16.218 [2024-07-13 22:20:35.418620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.418676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.418861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.418908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.419112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.419149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.419337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.419370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.419555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.419587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.419809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.419846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.420053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.420087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.420298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.420331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.420690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.420758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.420960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.420997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 
00:37:16.218 [2024-07-13 22:20:35.421200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.421236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.421433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.421466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.421650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.421683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.421929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.421966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.422188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.422221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.422404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.422438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.422624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.422661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.422890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.422927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.423098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.423135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.423371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.423403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 
00:37:16.218 [2024-07-13 22:20:35.423765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.423825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.424069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.424102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.424316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.424352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.424561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.218 [2024-07-13 22:20:35.424595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.218 qpair failed and we were unable to recover it. 00:37:16.218 [2024-07-13 22:20:35.424759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.424796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.425006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.425043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.425248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.425285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.425498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.425531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.425717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.425753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.425962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.425999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 
00:37:16.219 [2024-07-13 22:20:35.426181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.426217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.426399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.426433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.426606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.426639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.426830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.426896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.427117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.427151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.427336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.427369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.427578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.427614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.427849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.427888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.428133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.428169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.428369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.428402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 
00:37:16.219 [2024-07-13 22:20:35.428692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.428754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.428984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.429021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.429205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.429241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.429416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.429448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.429661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.429697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.429880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.429917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.430158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.430191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.430348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.430381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.430567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.430601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.430817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.430853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 
00:37:16.219 [2024-07-13 22:20:35.431064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.431102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.431288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.431322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.431601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.431656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.431903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.431940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.432129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.432166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.432359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.432391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.432569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.432605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.432810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.432846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.433041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.433078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.433309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.433342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 
00:37:16.219 [2024-07-13 22:20:35.433712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.433776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.433989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.434026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.434204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.434241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.434453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.219 [2024-07-13 22:20:35.434485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.219 qpair failed and we were unable to recover it. 00:37:16.219 [2024-07-13 22:20:35.434675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.434713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.434919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.434956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.435187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.435223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.435411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.435443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.435618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.435654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.435872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.435905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 
00:37:16.220 [2024-07-13 22:20:35.436115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.436152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.436360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.436392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.436690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.436754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.436983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.437020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.437204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.437241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.437437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.437471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.437743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.437801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.438016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.438059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.438270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.438307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.438491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.438525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 
00:37:16.220 [2024-07-13 22:20:35.438758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.438795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.439000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.439038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.439247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.439284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.439476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.439508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.439721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.439758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.439935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.439971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.440179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.440211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.440396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.440428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.440803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.440882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.441073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.441109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 
00:37:16.220 [2024-07-13 22:20:35.441317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.441353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.441598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.441630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.441824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.441857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.442075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.442111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.442342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.442374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.442561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.442593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.442841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.442888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.443124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.443171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.443380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.443415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.443625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.443659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 
00:37:16.220 [2024-07-13 22:20:35.443878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.443915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.444093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.444129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.444339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.444371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.444528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.444562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.220 [2024-07-13 22:20:35.444744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.220 [2024-07-13 22:20:35.444781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.220 qpair failed and we were unable to recover it. 00:37:16.221 [2024-07-13 22:20:35.444990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.221 [2024-07-13 22:20:35.445027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.221 qpair failed and we were unable to recover it. 00:37:16.221 [2024-07-13 22:20:35.445233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.221 [2024-07-13 22:20:35.445269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.221 qpair failed and we were unable to recover it. 00:37:16.221 [2024-07-13 22:20:35.445470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.221 [2024-07-13 22:20:35.445502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.221 qpair failed and we were unable to recover it. 00:37:16.221 [2024-07-13 22:20:35.445734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.221 [2024-07-13 22:20:35.445771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.221 qpair failed and we were unable to recover it. 00:37:16.221 [2024-07-13 22:20:35.446002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.221 [2024-07-13 22:20:35.446036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.221 qpair failed and we were unable to recover it. 
00:37:16.221 [2024-07-13 22:20:35.446216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.221 [2024-07-13 22:20:35.446252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.221 qpair failed and we were unable to recover it. 00:37:16.221 [2024-07-13 22:20:35.446495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.221 [2024-07-13 22:20:35.446528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.221 qpair failed and we were unable to recover it. 00:37:16.221 [2024-07-13 22:20:35.446753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.221 [2024-07-13 22:20:35.446787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.221 qpair failed and we were unable to recover it. 00:37:16.221 [2024-07-13 22:20:35.446996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.221 [2024-07-13 22:20:35.447033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.221 qpair failed and we were unable to recover it. 00:37:16.221 [2024-07-13 22:20:35.447268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.221 [2024-07-13 22:20:35.447304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.221 qpair failed and we were unable to recover it. 00:37:16.221 [2024-07-13 22:20:35.447478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.221 [2024-07-13 22:20:35.447511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.221 qpair failed and we were unable to recover it. 00:37:16.221 [2024-07-13 22:20:35.447717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.221 [2024-07-13 22:20:35.447753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.221 qpair failed and we were unable to recover it. 00:37:16.221 [2024-07-13 22:20:35.447967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.221 [2024-07-13 22:20:35.448005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.221 qpair failed and we were unable to recover it. 00:37:16.221 [2024-07-13 22:20:35.448213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.221 [2024-07-13 22:20:35.448249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.221 qpair failed and we were unable to recover it. 00:37:16.221 [2024-07-13 22:20:35.448431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.221 [2024-07-13 22:20:35.448464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.221 qpair failed and we were unable to recover it. 
[... the same three-line pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats verbatim for every subsequent reconnection attempt from 2024-07-13 22:20:35.448677 through 22:20:35.498557; the duplicate records are elided here ...]
00:37:16.226 [2024-07-13 22:20:35.498781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.226 [2024-07-13 22:20:35.498814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.226 qpair failed and we were unable to recover it. 00:37:16.226 [2024-07-13 22:20:35.498991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.226 [2024-07-13 22:20:35.499024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.226 qpair failed and we were unable to recover it. 00:37:16.226 [2024-07-13 22:20:35.499216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.226 [2024-07-13 22:20:35.499249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.226 qpair failed and we were unable to recover it. 00:37:16.226 [2024-07-13 22:20:35.499503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.226 [2024-07-13 22:20:35.499536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.226 qpair failed and we were unable to recover it. 00:37:16.226 [2024-07-13 22:20:35.499738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.226 [2024-07-13 22:20:35.499774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.226 qpair failed and we were unable to recover it. 00:37:16.226 [2024-07-13 22:20:35.499977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.226 [2024-07-13 22:20:35.500014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.226 qpair failed and we were unable to recover it. 00:37:16.226 [2024-07-13 22:20:35.500244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.226 [2024-07-13 22:20:35.500280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.226 qpair failed and we were unable to recover it. 00:37:16.226 [2024-07-13 22:20:35.500508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.226 [2024-07-13 22:20:35.500541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.226 qpair failed and we were unable to recover it. 00:37:16.226 [2024-07-13 22:20:35.500754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.226 [2024-07-13 22:20:35.500790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.226 qpair failed and we were unable to recover it. 00:37:16.226 [2024-07-13 22:20:35.500964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.226 [2024-07-13 22:20:35.501001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.226 qpair failed and we were unable to recover it. 
00:37:16.226 [2024-07-13 22:20:35.501241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.226 [2024-07-13 22:20:35.501278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.226 qpair failed and we were unable to recover it. 00:37:16.226 [2024-07-13 22:20:35.501467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.226 [2024-07-13 22:20:35.501500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.226 qpair failed and we were unable to recover it. 00:37:16.226 [2024-07-13 22:20:35.501711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.226 [2024-07-13 22:20:35.501748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.226 qpair failed and we were unable to recover it. 00:37:16.226 [2024-07-13 22:20:35.501965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.226 [2024-07-13 22:20:35.501998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.226 qpair failed and we were unable to recover it. 00:37:16.226 [2024-07-13 22:20:35.502183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.226 [2024-07-13 22:20:35.502215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.226 qpair failed and we were unable to recover it. 00:37:16.226 [2024-07-13 22:20:35.502402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.226 [2024-07-13 22:20:35.502435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.226 qpair failed and we were unable to recover it. 00:37:16.226 [2024-07-13 22:20:35.502770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.226 [2024-07-13 22:20:35.502826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.226 qpair failed and we were unable to recover it. 00:37:16.226 [2024-07-13 22:20:35.503064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.226 [2024-07-13 22:20:35.503097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.226 qpair failed and we were unable to recover it. 00:37:16.226 [2024-07-13 22:20:35.503341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.226 [2024-07-13 22:20:35.503373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.226 qpair failed and we were unable to recover it. 00:37:16.226 [2024-07-13 22:20:35.503562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.226 [2024-07-13 22:20:35.503595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.226 qpair failed and we were unable to recover it. 
00:37:16.226 [2024-07-13 22:20:35.503804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.226 [2024-07-13 22:20:35.503840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.226 qpair failed and we were unable to recover it. 00:37:16.226 [2024-07-13 22:20:35.504056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.226 [2024-07-13 22:20:35.504093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.226 qpair failed and we were unable to recover it. 00:37:16.226 [2024-07-13 22:20:35.504300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.504336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.504519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.504552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.504733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.504773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.504984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.505021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.505195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.505231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.505409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.505442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.505621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.505696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.505877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.505914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 
00:37:16.227 [2024-07-13 22:20:35.506092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.506128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.506337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.506369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.506537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.506569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.506771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.506817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.507003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.507040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.507269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.507301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.507578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.507614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.507784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.507821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.508043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.508080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.508318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.508350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 
00:37:16.227 [2024-07-13 22:20:35.508710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.508772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.508979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.509016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.509196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.509232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.509434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.509468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.509679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.509716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.509918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.509955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.510188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.510224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.510457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.510490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.510710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.510747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.510952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.510989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 
00:37:16.227 [2024-07-13 22:20:35.511198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.511235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.511470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.511503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.511715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.511753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.511972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.512009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.512218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.512254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.512468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.512500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.227 qpair failed and we were unable to recover it. 00:37:16.227 [2024-07-13 22:20:35.512714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.227 [2024-07-13 22:20:35.512750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.512977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.513014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.513246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.513282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.513467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.513499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 
00:37:16.228 [2024-07-13 22:20:35.513707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.513743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.513970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.514007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.514233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.514271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.514454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.514487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.514692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.514733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.514973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.515010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.515187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.515223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.515435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.515468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.515678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.515714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.515902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.515939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 
00:37:16.228 [2024-07-13 22:20:35.516146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.516182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.516397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.516431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.516675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.516712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.516936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.516974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.517160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.517198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.517431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.517464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.517659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.517696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.517933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.517966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.518137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.518186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.518393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.518425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 
00:37:16.228 [2024-07-13 22:20:35.518631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.518668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.518853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.518904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.519118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.519151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.519313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.519345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.519620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.519679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.519914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.519951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.520150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.520186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.520357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.520390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.520547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.520579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.520742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.520776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 
00:37:16.228 [2024-07-13 22:20:35.520984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.521021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.521224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.521257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.521528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.521589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.521797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.521833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.522018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.522055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.522268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.522300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.522500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.228 [2024-07-13 22:20:35.522549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.228 qpair failed and we were unable to recover it. 00:37:16.228 [2024-07-13 22:20:35.522779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.522827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.523014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.523051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.523231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.523264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 
00:37:16.229 [2024-07-13 22:20:35.523455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.523487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.523700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.523735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.523917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.523954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.524137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.524170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.524443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.524505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.524748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.524780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.525028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.525065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.525286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.525318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.525592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.525650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.525887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.525920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 
00:37:16.229 [2024-07-13 22:20:35.526106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.526139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.526364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.526397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.526611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.526647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.526854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.526897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.527134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.527166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.527354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.527387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.527622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.527658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.527890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.527927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.528137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.528174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.528365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.528397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 
00:37:16.229 [2024-07-13 22:20:35.528608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.528644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.528887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.528920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.529090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.529123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.529344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.529377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.529678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.529741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.529946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.529983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.530214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.530251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.530462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.530495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.530686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.530719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 00:37:16.229 [2024-07-13 22:20:35.530908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.229 [2024-07-13 22:20:35.530945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.229 qpair failed and we were unable to recover it. 
00:37:16.229 [2024-07-13 22:20:35.531150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.229 [2024-07-13 22:20:35.531186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.229 qpair failed and we were unable to recover it.
[... the same three-line error repeats ~210 times between 22:20:35.531 and 22:20:35.583: posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:37:16.523 [2024-07-13 22:20:35.583635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.523 [2024-07-13 22:20:35.583668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.523 qpair failed and we were unable to recover it.
00:37:16.523 [2024-07-13 22:20:35.583856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.523 [2024-07-13 22:20:35.583902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.523 qpair failed and we were unable to recover it. 00:37:16.523 [2024-07-13 22:20:35.584094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.523 [2024-07-13 22:20:35.584130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.523 qpair failed and we were unable to recover it. 00:37:16.523 [2024-07-13 22:20:35.584304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.523 [2024-07-13 22:20:35.584341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.523 qpair failed and we were unable to recover it. 00:37:16.523 [2024-07-13 22:20:35.584516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.523 [2024-07-13 22:20:35.584550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.523 qpair failed and we were unable to recover it. 00:37:16.523 [2024-07-13 22:20:35.584740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.523 [2024-07-13 22:20:35.584772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.523 qpair failed and we were unable to recover it. 00:37:16.523 [2024-07-13 22:20:35.584938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.523 [2024-07-13 22:20:35.584972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.523 qpair failed and we were unable to recover it. 00:37:16.523 [2024-07-13 22:20:35.585131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.523 [2024-07-13 22:20:35.585164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.523 qpair failed and we were unable to recover it. 00:37:16.523 [2024-07-13 22:20:35.585373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.523 [2024-07-13 22:20:35.585406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.523 qpair failed and we were unable to recover it. 00:37:16.523 [2024-07-13 22:20:35.585750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.523 [2024-07-13 22:20:35.585811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.523 qpair failed and we were unable to recover it. 00:37:16.523 [2024-07-13 22:20:35.586054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.523 [2024-07-13 22:20:35.586091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.523 qpair failed and we were unable to recover it. 
00:37:16.523 [2024-07-13 22:20:35.586298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.523 [2024-07-13 22:20:35.586335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.523 qpair failed and we were unable to recover it. 00:37:16.523 [2024-07-13 22:20:35.586545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.523 [2024-07-13 22:20:35.586578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.523 qpair failed and we were unable to recover it. 00:37:16.523 [2024-07-13 22:20:35.586806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.523 [2024-07-13 22:20:35.586843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.523 qpair failed and we were unable to recover it. 00:37:16.523 [2024-07-13 22:20:35.587084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.523 [2024-07-13 22:20:35.587127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.523 qpair failed and we were unable to recover it. 00:37:16.523 [2024-07-13 22:20:35.587346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.523 [2024-07-13 22:20:35.587382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.523 qpair failed and we were unable to recover it. 00:37:16.523 [2024-07-13 22:20:35.587625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.523 [2024-07-13 22:20:35.587658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.523 qpair failed and we were unable to recover it. 00:37:16.523 [2024-07-13 22:20:35.587846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.523 [2024-07-13 22:20:35.587890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.523 qpair failed and we were unable to recover it. 00:37:16.523 [2024-07-13 22:20:35.588100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.523 [2024-07-13 22:20:35.588134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.523 qpair failed and we were unable to recover it. 00:37:16.523 [2024-07-13 22:20:35.588377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.523 [2024-07-13 22:20:35.588413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.523 qpair failed and we were unable to recover it. 00:37:16.523 [2024-07-13 22:20:35.588624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.523 [2024-07-13 22:20:35.588657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.523 qpair failed and we were unable to recover it. 
00:37:16.523 [2024-07-13 22:20:35.588841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.523 [2024-07-13 22:20:35.588887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.523 qpair failed and we were unable to recover it. 00:37:16.523 [2024-07-13 22:20:35.589091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.523 [2024-07-13 22:20:35.589127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.523 qpair failed and we were unable to recover it. 00:37:16.523 [2024-07-13 22:20:35.589373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.523 [2024-07-13 22:20:35.589409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.523 qpair failed and we were unable to recover it. 00:37:16.523 [2024-07-13 22:20:35.589620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.523 [2024-07-13 22:20:35.589652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.523 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.589838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.589879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.590125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.590161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.590397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.590433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.590659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.590692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.590901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.590938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.591164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.591201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 
00:37:16.524 [2024-07-13 22:20:35.591399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.591436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.591625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.591661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.591912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.591945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.592167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.592218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.592419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.592455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.592686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.592719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.592963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.593000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.593211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.593244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.593427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.593459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.593665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.593699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 
00:37:16.524 [2024-07-13 22:20:35.593909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.593946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.594146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.594182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.594386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.594422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.594599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.594631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.594789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.594821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.594985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.595018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.595233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.595265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.595447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.595480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.595659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.595692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.595885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.595919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 
00:37:16.524 [2024-07-13 22:20:35.596083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.596116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.596276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.596309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.596521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.596553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.596767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.596800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.596961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.524 [2024-07-13 22:20:35.596995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.524 qpair failed and we were unable to recover it. 00:37:16.524 [2024-07-13 22:20:35.597152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.597190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.597350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.597383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.597548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.597580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.597799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.597836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.598016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.598049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 
00:37:16.525 [2024-07-13 22:20:35.598255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.598291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.598490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.598527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.598759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.598795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.598989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.599022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.599231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.599267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.599437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.599474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.599650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.599687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.599923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.599956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.600121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.600153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.600341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.600373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 
00:37:16.525 [2024-07-13 22:20:35.600536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.600569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.600755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.600792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.600960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.600993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.601180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.601216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.601420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.601456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.601661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.601693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.601890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.601924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.602111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.602153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.602309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.602342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.602555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.602588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 
00:37:16.525 [2024-07-13 22:20:35.602740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.602773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.602991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.603028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.603257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.603293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.603503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.603535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.603700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.603733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.603897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.603931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.604144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.604181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.525 qpair failed and we were unable to recover it. 00:37:16.525 [2024-07-13 22:20:35.604392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.525 [2024-07-13 22:20:35.604425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.604588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.604621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.604807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.604840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 
00:37:16.526 [2024-07-13 22:20:35.605096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.605133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.605322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.605355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.605523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.605556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.605768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.605803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.606016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.606055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.606232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.606265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.606452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.606486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.606680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.606713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.606906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.606939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.607104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.607137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 
00:37:16.526 [2024-07-13 22:20:35.607322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.607356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.607566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.607602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.607808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.607844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.608064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.608097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.608359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.608391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.608569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.608602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.608763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.608795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.608962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.608995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.609163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.609196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.609378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.609411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 
00:37:16.526 [2024-07-13 22:20:35.609594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.609627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.609809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.609849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.610016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.610049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.610210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.610242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.610447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.610483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.610715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.610748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.610907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.610940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.611098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.611131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.611329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.611362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.526 [2024-07-13 22:20:35.611523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.611557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 
00:37:16.526 [2024-07-13 22:20:35.611792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.526 [2024-07-13 22:20:35.611828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.526 qpair failed and we were unable to recover it. 00:37:16.527 [2024-07-13 22:20:35.612032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.527 [2024-07-13 22:20:35.612065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.527 qpair failed and we were unable to recover it. 00:37:16.527 [2024-07-13 22:20:35.612229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.527 [2024-07-13 22:20:35.612263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.527 qpair failed and we were unable to recover it. 00:37:16.527 [2024-07-13 22:20:35.612452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.527 [2024-07-13 22:20:35.612485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.527 qpair failed and we were unable to recover it. 00:37:16.527 [2024-07-13 22:20:35.612668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.527 [2024-07-13 22:20:35.612702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.527 qpair failed and we were unable to recover it. 00:37:16.527 [2024-07-13 22:20:35.612893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.527 [2024-07-13 22:20:35.612927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.527 qpair failed and we were unable to recover it. 00:37:16.527 [2024-07-13 22:20:35.613083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.527 [2024-07-13 22:20:35.613117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.527 qpair failed and we were unable to recover it. 00:37:16.527 [2024-07-13 22:20:35.613339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.527 [2024-07-13 22:20:35.613372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.527 qpair failed and we were unable to recover it. 00:37:16.527 [2024-07-13 22:20:35.613530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.527 [2024-07-13 22:20:35.613564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.527 qpair failed and we were unable to recover it. 00:37:16.527 [2024-07-13 22:20:35.613797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.527 [2024-07-13 22:20:35.613834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.527 qpair failed and we were unable to recover it. 
00:37:16.527 [2024-07-13 22:20:35.614049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.527 [2024-07-13 22:20:35.614086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.527 qpair failed and we were unable to recover it. 00:37:16.527 [2024-07-13 22:20:35.614269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.527 [2024-07-13 22:20:35.614302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.527 qpair failed and we were unable to recover it. 00:37:16.527 [2024-07-13 22:20:35.614489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.527 [2024-07-13 22:20:35.614522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.527 qpair failed and we were unable to recover it. 00:37:16.527 [2024-07-13 22:20:35.614687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.527 [2024-07-13 22:20:35.614720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.527 qpair failed and we were unable to recover it. 00:37:16.527 [2024-07-13 22:20:35.614925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.527 [2024-07-13 22:20:35.614963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.527 qpair failed and we were unable to recover it. 00:37:16.527 [2024-07-13 22:20:35.615178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.527 [2024-07-13 22:20:35.615211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.527 qpair failed and we were unable to recover it. 00:37:16.527 [2024-07-13 22:20:35.615399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.527 [2024-07-13 22:20:35.615432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.527 qpair failed and we were unable to recover it. 00:37:16.527 [2024-07-13 22:20:35.615641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.527 [2024-07-13 22:20:35.615674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.527 qpair failed and we were unable to recover it. 00:37:16.527 [2024-07-13 22:20:35.615831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.527 [2024-07-13 22:20:35.615871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.527 qpair failed and we were unable to recover it. 00:37:16.527 [2024-07-13 22:20:35.616056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.527 [2024-07-13 22:20:35.616088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.527 qpair failed and we were unable to recover it. 
00:37:16.534 [2024-07-13 22:20:35.659636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.534 [2024-07-13 22:20:35.659688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.534 qpair failed and we were unable to recover it. 00:37:16.534 [2024-07-13 22:20:35.659874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.534 [2024-07-13 22:20:35.659907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.534 qpair failed and we were unable to recover it. 00:37:16.534 [2024-07-13 22:20:35.660073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.534 [2024-07-13 22:20:35.660107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.534 qpair failed and we were unable to recover it. 00:37:16.534 [2024-07-13 22:20:35.660292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.534 [2024-07-13 22:20:35.660352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.534 qpair failed and we were unable to recover it. 00:37:16.534 [2024-07-13 22:20:35.660552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.534 [2024-07-13 22:20:35.660588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.534 qpair failed and we were unable to recover it. 00:37:16.534 [2024-07-13 22:20:35.660792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.534 [2024-07-13 22:20:35.660824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.534 qpair failed and we were unable to recover it. 00:37:16.534 [2024-07-13 22:20:35.661016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.534 [2024-07-13 22:20:35.661053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.534 qpair failed and we were unable to recover it. 00:37:16.534 [2024-07-13 22:20:35.661289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.534 [2024-07-13 22:20:35.661325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.534 qpair failed and we were unable to recover it. 00:37:16.534 [2024-07-13 22:20:35.661563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.534 [2024-07-13 22:20:35.661619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.534 qpair failed and we were unable to recover it. 00:37:16.534 [2024-07-13 22:20:35.661840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.534 [2024-07-13 22:20:35.661896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.534 qpair failed and we were unable to recover it. 
00:37:16.534 [2024-07-13 22:20:35.662122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.534 [2024-07-13 22:20:35.662171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.534 qpair failed and we were unable to recover it. 00:37:16.534 [2024-07-13 22:20:35.662396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.534 [2024-07-13 22:20:35.662451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.534 qpair failed and we were unable to recover it. 00:37:16.534 [2024-07-13 22:20:35.662746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.534 [2024-07-13 22:20:35.662796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.534 qpair failed and we were unable to recover it. 00:37:16.534 [2024-07-13 22:20:35.663010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.534 [2024-07-13 22:20:35.663046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.534 qpair failed and we were unable to recover it. 00:37:16.534 [2024-07-13 22:20:35.663220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.534 [2024-07-13 22:20:35.663271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.534 qpair failed and we were unable to recover it. 00:37:16.534 [2024-07-13 22:20:35.663452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.534 [2024-07-13 22:20:35.663499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.534 qpair failed and we were unable to recover it. 00:37:16.534 [2024-07-13 22:20:35.663713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.534 [2024-07-13 22:20:35.663766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.534 qpair failed and we were unable to recover it. 00:37:16.534 [2024-07-13 22:20:35.664031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.534 [2024-07-13 22:20:35.664080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.534 qpair failed and we were unable to recover it. 00:37:16.534 [2024-07-13 22:20:35.664436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.534 [2024-07-13 22:20:35.664510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.534 qpair failed and we were unable to recover it. 00:37:16.534 [2024-07-13 22:20:35.664734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.664771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 
00:37:16.535 [2024-07-13 22:20:35.664957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.665002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.665199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.665253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.665480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.665529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.665757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.665811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.666066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.666116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.666348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.666381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.666591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.666629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.666856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.666912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.667107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.667154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.667390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.667437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 
00:37:16.535 [2024-07-13 22:20:35.667680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.667721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.667913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.667948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.668162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.668223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.668526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.668574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.668802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.668875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.669137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.669191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.669438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.669490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.669751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.669800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.670088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.670144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.670395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.670433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 
00:37:16.535 [2024-07-13 22:20:35.670614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.670652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.670885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.670935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.671157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.671211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.671447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.671499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.671738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.671778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.672027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.672062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.672254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.672307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.672576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.672630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.672903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.672964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 00:37:16.535 [2024-07-13 22:20:35.673217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.535 [2024-07-13 22:20:35.673254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.535 qpair failed and we were unable to recover it. 
00:37:16.536 [2024-07-13 22:20:35.673418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.673451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.673631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.673670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.673913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.673968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.674223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.674270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.674550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.674619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.674874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.674915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.675119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.675157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.675380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.675428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.675620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.675668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.675873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.675923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 
00:37:16.536 [2024-07-13 22:20:35.676128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.676176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.676352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.676388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.676584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.676638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.676879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.676917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.677106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.677139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.677329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.677362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.677523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.677556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.677741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.677774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.677972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.678010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.678219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.678252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 
00:37:16.536 [2024-07-13 22:20:35.678442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.678492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.678705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.678738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.678904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.678939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.679098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.679132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.679395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.679452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.679682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.679715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.679895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.679932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.680116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.680148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.680303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.680338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 00:37:16.536 [2024-07-13 22:20:35.680540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.536 [2024-07-13 22:20:35.680577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.536 qpair failed and we were unable to recover it. 
00:37:16.537 [2024-07-13 22:20:35.680780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.680816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.681024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.681058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.681281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.681337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.681568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.681604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.681837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.681880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.682103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.682135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.682338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.682374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.682586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.682619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.682830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.682882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.683098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.683130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 
00:37:16.537 [2024-07-13 22:20:35.683321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.683353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.683585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.683621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.683811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.683845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.684040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.684073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.684364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.684422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.684656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.684693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.684884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.684918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.685104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.685137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.685338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.685409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.685636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.685672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 
00:37:16.537 [2024-07-13 22:20:35.685876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.685913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.686100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.686133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.686444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.686503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.686726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.686763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.686951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.686988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.687204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.687237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.687452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.687488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.687730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.537 [2024-07-13 22:20:35.687766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.537 qpair failed and we were unable to recover it. 00:37:16.537 [2024-07-13 22:20:35.687967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.688004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.538 qpair failed and we were unable to recover it. 00:37:16.538 [2024-07-13 22:20:35.688245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.688278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.538 qpair failed and we were unable to recover it. 
00:37:16.538 [2024-07-13 22:20:35.688565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.688622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.538 qpair failed and we were unable to recover it. 00:37:16.538 [2024-07-13 22:20:35.688876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.688914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.538 qpair failed and we were unable to recover it. 00:37:16.538 [2024-07-13 22:20:35.689146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.689182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.538 qpair failed and we were unable to recover it. 00:37:16.538 [2024-07-13 22:20:35.689395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.689428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.538 qpair failed and we were unable to recover it. 00:37:16.538 [2024-07-13 22:20:35.689741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.689814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.538 qpair failed and we were unable to recover it. 00:37:16.538 [2024-07-13 22:20:35.690033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.690067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.538 qpair failed and we were unable to recover it. 00:37:16.538 [2024-07-13 22:20:35.690260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.690293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.538 qpair failed and we were unable to recover it. 00:37:16.538 [2024-07-13 22:20:35.690502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.690534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.538 qpair failed and we were unable to recover it. 00:37:16.538 [2024-07-13 22:20:35.690754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.690791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.538 qpair failed and we were unable to recover it. 00:37:16.538 [2024-07-13 22:20:35.691027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.691075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.538 qpair failed and we were unable to recover it. 
00:37:16.538 [2024-07-13 22:20:35.691295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.691328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.538 qpair failed and we were unable to recover it. 00:37:16.538 [2024-07-13 22:20:35.691540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.691573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.538 qpair failed and we were unable to recover it. 00:37:16.538 [2024-07-13 22:20:35.691826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.691859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.538 qpair failed and we were unable to recover it. 00:37:16.538 [2024-07-13 22:20:35.692031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.692066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.538 qpair failed and we were unable to recover it. 00:37:16.538 [2024-07-13 22:20:35.692274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.692310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.538 qpair failed and we were unable to recover it. 00:37:16.538 [2024-07-13 22:20:35.692498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.692531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.538 qpair failed and we were unable to recover it. 00:37:16.538 [2024-07-13 22:20:35.692742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.692777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.538 qpair failed and we were unable to recover it. 00:37:16.538 [2024-07-13 22:20:35.692985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.693022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.538 qpair failed and we were unable to recover it. 00:37:16.538 [2024-07-13 22:20:35.693235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.693273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.538 qpair failed and we were unable to recover it. 00:37:16.538 [2024-07-13 22:20:35.693461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.693493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.538 qpair failed and we were unable to recover it. 
00:37:16.538 [2024-07-13 22:20:35.693702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.693739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.538 qpair failed and we were unable to recover it. 00:37:16.538 [2024-07-13 22:20:35.693945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.693978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.538 qpair failed and we were unable to recover it. 00:37:16.538 [2024-07-13 22:20:35.694172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.694204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.538 qpair failed and we were unable to recover it. 00:37:16.538 [2024-07-13 22:20:35.694439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.538 [2024-07-13 22:20:35.694472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 00:37:16.539 [2024-07-13 22:20:35.694677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.694713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 00:37:16.539 [2024-07-13 22:20:35.694891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.694927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 00:37:16.539 [2024-07-13 22:20:35.695144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.695177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 00:37:16.539 [2024-07-13 22:20:35.695357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.695390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 00:37:16.539 [2024-07-13 22:20:35.695732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.695789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 00:37:16.539 [2024-07-13 22:20:35.696034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.696068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 
00:37:16.539 [2024-07-13 22:20:35.696366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.696428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 00:37:16.539 [2024-07-13 22:20:35.696669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.696702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 00:37:16.539 [2024-07-13 22:20:35.696955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.696992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 00:37:16.539 [2024-07-13 22:20:35.697193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.697229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 00:37:16.539 [2024-07-13 22:20:35.697459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.697496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 00:37:16.539 [2024-07-13 22:20:35.697700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.697733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 00:37:16.539 [2024-07-13 22:20:35.697897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.697931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 00:37:16.539 [2024-07-13 22:20:35.698166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.698202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 00:37:16.539 [2024-07-13 22:20:35.698442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.698475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 00:37:16.539 [2024-07-13 22:20:35.698627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.698660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 
00:37:16.539 [2024-07-13 22:20:35.698875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.698912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 00:37:16.539 [2024-07-13 22:20:35.699113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.699149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 00:37:16.539 [2024-07-13 22:20:35.699385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.699421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 00:37:16.539 [2024-07-13 22:20:35.699655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.699687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 00:37:16.539 [2024-07-13 22:20:35.699892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.699926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 00:37:16.539 [2024-07-13 22:20:35.700140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.700177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 00:37:16.539 [2024-07-13 22:20:35.700418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.700451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 00:37:16.539 [2024-07-13 22:20:35.700648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.700680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 00:37:16.539 [2024-07-13 22:20:35.700909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.700942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 00:37:16.539 [2024-07-13 22:20:35.701100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.539 [2024-07-13 22:20:35.701153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.539 qpair failed and we were unable to recover it. 
00:37:16.539 [2024-07-13 22:20:35.701367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.539 [2024-07-13 22:20:35.701409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.539 qpair failed and we were unable to recover it.
00:37:16.539 [2024-07-13 22:20:35.701638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.539 [2024-07-13 22:20:35.701671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.539 qpair failed and we were unable to recover it.
00:37:16.539 [2024-07-13 22:20:35.701914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.539 [2024-07-13 22:20:35.701950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.702181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.702217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.702421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.702457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.702646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.702679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.702871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.702904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.703141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.703174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.703357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.703394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.703620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.703653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.703840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.703895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.704112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.704148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.704354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.704390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.704624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.704657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.704864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.704909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.705155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.705187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.705401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.705451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.705664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.705696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.705906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.705943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.706190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.706222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.706434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.706484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.706694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.706727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.706949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.706986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.707192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.707262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.707491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.707527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.707733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.707765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.708041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.708098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.708286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.708325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.708527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.708563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.708743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.708775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.709056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.709117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.709331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.709367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.709569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.709605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.709813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.709845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.540 [2024-07-13 22:20:35.710030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.540 [2024-07-13 22:20:35.710066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.540 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.710287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.710321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.710534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.710570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.710760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.710794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.711003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.711099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.711308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.711345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.711554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.711587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.711799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.711831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.712027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.712064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.712302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.712339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.712544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.712580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.712770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.712803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.713015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.713066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.713274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.713311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.713512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.713553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.713728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.713760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.713948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.713982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.714142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.714176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.714361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.714394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.714549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.714581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.714765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.714803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.715005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.715039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.715247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.715297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.715540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.715572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.715775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.715812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.716027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.716060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.716252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.716288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.716470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.716504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.716699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.716737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.716942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.716979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.717201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.717234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.717419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.717453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.541 qpair failed and we were unable to recover it.
00:37:16.541 [2024-07-13 22:20:35.717679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.541 [2024-07-13 22:20:35.717713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.717929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.717967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.718152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.718186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.718406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.718439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.718691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.718724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.718938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.718975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.719195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.719232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.719439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.719472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.719634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.719668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.719910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.719947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.720130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.720167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.720364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.720397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.720717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.720776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.721017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.721054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.721303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.721336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.721548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.721581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.721798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.721831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.722023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.722057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.722246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.722283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.722490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.722522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.722782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.722816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.723011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.723055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.723281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.723318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.723484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.723517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.723746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.723782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.723960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.723998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.724203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.724240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.724475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.724507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.724755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.724792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.724997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.725034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.725243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.725281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.725485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.542 [2024-07-13 22:20:35.725518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.542 qpair failed and we were unable to recover it.
00:37:16.542 [2024-07-13 22:20:35.725744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.725776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.726009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.726046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.726282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.726315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.726502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.726535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.726701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.726734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.726966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.727003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.727209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.727241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.727431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.727463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.727681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.727714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.727910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.727960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.728173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.728209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.728419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.728453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.728666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.728703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.728881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.728919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.729125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.729162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.729397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.729430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.729639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.729675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.729897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.729935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.730125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.730162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.730363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.730396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.730636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.730672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.730891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.730925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.731112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.731145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.731303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.731335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.731618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.731675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.731902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.731939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.732124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.732162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.732373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.732406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.732650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.732706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.732919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.732956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.733158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.733200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.733422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.733454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.733644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.543 [2024-07-13 22:20:35.733677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.543 qpair failed and we were unable to recover it.
00:37:16.543 [2024-07-13 22:20:35.733860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.544 [2024-07-13 22:20:35.733902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.544 qpair failed and we were unable to recover it.
00:37:16.544 [2024-07-13 22:20:35.734091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.544 [2024-07-13 22:20:35.734127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.544 qpair failed and we were unable to recover it.
00:37:16.544 [2024-07-13 22:20:35.734364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.544 [2024-07-13 22:20:35.734397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.544 qpair failed and we were unable to recover it.
00:37:16.544 [2024-07-13 22:20:35.734719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.544 [2024-07-13 22:20:35.734785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.544 qpair failed and we were unable to recover it.
00:37:16.544 [2024-07-13 22:20:35.735012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.544 [2024-07-13 22:20:35.735045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.544 qpair failed and we were unable to recover it.
00:37:16.544 [2024-07-13 22:20:35.735230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.544 [2024-07-13 22:20:35.735266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.544 qpair failed and we were unable to recover it.
00:37:16.544 [2024-07-13 22:20:35.735500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.544 [2024-07-13 22:20:35.735533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.544 qpair failed and we were unable to recover it.
00:37:16.544 [2024-07-13 22:20:35.735719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.544 [2024-07-13 22:20:35.735757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.544 qpair failed and we were unable to recover it.
00:37:16.544 [2024-07-13 22:20:35.735940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.544 [2024-07-13 22:20:35.735978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.544 qpair failed and we were unable to recover it.
00:37:16.544 [2024-07-13 22:20:35.736164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.544 [2024-07-13 22:20:35.736200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.544 qpair failed and we were unable to recover it.
00:37:16.544 [2024-07-13 22:20:35.736430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.544 [2024-07-13 22:20:35.736463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.544 qpair failed and we were unable to recover it.
00:37:16.544 [2024-07-13 22:20:35.736714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.544 [2024-07-13 22:20:35.736747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.544 qpair failed and we were unable to recover it.
00:37:16.544 [2024-07-13 22:20:35.736965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.544 [2024-07-13 22:20:35.737002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.544 qpair failed and we were unable to recover it.
00:37:16.544 [2024-07-13 22:20:35.737257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.544 [2024-07-13 22:20:35.737290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.544 qpair failed and we were unable to recover it.
00:37:16.544 [2024-07-13 22:20:35.737474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.544 [2024-07-13 22:20:35.737507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.544 qpair failed and we were unable to recover it.
00:37:16.544 [2024-07-13 22:20:35.737714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.544 [2024-07-13 22:20:35.737750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.544 qpair failed and we were unable to recover it.
00:37:16.544 [2024-07-13 22:20:35.737961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.544 [2024-07-13 22:20:35.737998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.544 qpair failed and we were unable to recover it.
00:37:16.544 [2024-07-13 22:20:35.738214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.544 [2024-07-13 22:20:35.738247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.544 qpair failed and we were unable to recover it.
00:37:16.544 [2024-07-13 22:20:35.738431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.544 [2024-07-13 22:20:35.738464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.544 qpair failed and we were unable to recover it.
00:37:16.544 [2024-07-13 22:20:35.738653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.544 [2024-07-13 22:20:35.738686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.544 qpair failed and we were unable to recover it.
00:37:16.544 [2024-07-13 22:20:35.738931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.544 [2024-07-13 22:20:35.738979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.544 qpair failed and we were unable to recover it.
00:37:16.544 [2024-07-13 22:20:35.739209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.544 [2024-07-13 22:20:35.739245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.544 qpair failed and we were unable to recover it.
00:37:16.544 [2024-07-13 22:20:35.739436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.544 [2024-07-13 22:20:35.739469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.544 qpair failed and we were unable to recover it.
00:37:16.544 [2024-07-13 22:20:35.739653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.545 [2024-07-13 22:20:35.739690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.545 qpair failed and we were unable to recover it.
00:37:16.545 [2024-07-13 22:20:35.739930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.545 [2024-07-13 22:20:35.739967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.545 qpair failed and we were unable to recover it.
00:37:16.545 [2024-07-13 22:20:35.740179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.545 [2024-07-13 22:20:35.740217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.545 qpair failed and we were unable to recover it.
00:37:16.545 [2024-07-13 22:20:35.740426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.545 [2024-07-13 22:20:35.740459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.545 qpair failed and we were unable to recover it.
00:37:16.545 [2024-07-13 22:20:35.740648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.545 [2024-07-13 22:20:35.740681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.545 qpair failed and we were unable to recover it.
00:37:16.545 [2024-07-13 22:20:35.740895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.545 [2024-07-13 22:20:35.740932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.545 qpair failed and we were unable to recover it.
00:37:16.545 [2024-07-13 22:20:35.741179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.545 [2024-07-13 22:20:35.741211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.545 qpair failed and we were unable to recover it.
00:37:16.545 [2024-07-13 22:20:35.741400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.545 [2024-07-13 22:20:35.741432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.545 qpair failed and we were unable to recover it.
00:37:16.545 [2024-07-13 22:20:35.741720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.545 [2024-07-13 22:20:35.741757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.545 qpair failed and we were unable to recover it.
00:37:16.545 [2024-07-13 22:20:35.741974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.545 [2024-07-13 22:20:35.742008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.545 qpair failed and we were unable to recover it.
00:37:16.545 [2024-07-13 22:20:35.742164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.545 [2024-07-13 22:20:35.742196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.545 qpair failed and we were unable to recover it.
00:37:16.545 [2024-07-13 22:20:35.742408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.545 [2024-07-13 22:20:35.742440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.545 qpair failed and we were unable to recover it.
00:37:16.545 [2024-07-13 22:20:35.742642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.545 [2024-07-13 22:20:35.742678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.545 qpair failed and we were unable to recover it.
00:37:16.545 [2024-07-13 22:20:35.742882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.545 [2024-07-13 22:20:35.742919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.545 qpair failed and we were unable to recover it.
00:37:16.545 [2024-07-13 22:20:35.743125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.545 [2024-07-13 22:20:35.743167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.545 qpair failed and we were unable to recover it.
00:37:16.545 [2024-07-13 22:20:35.743411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.545 [2024-07-13 22:20:35.743443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.545 qpair failed and we were unable to recover it.
00:37:16.545 [2024-07-13 22:20:35.743650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.545 [2024-07-13 22:20:35.743683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.545 qpair failed and we were unable to recover it. 00:37:16.545 [2024-07-13 22:20:35.743933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.545 [2024-07-13 22:20:35.743971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.545 qpair failed and we were unable to recover it. 00:37:16.545 [2024-07-13 22:20:35.744182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.545 [2024-07-13 22:20:35.744219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.545 qpair failed and we were unable to recover it. 00:37:16.545 [2024-07-13 22:20:35.744429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.545 [2024-07-13 22:20:35.744462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.545 qpair failed and we were unable to recover it. 00:37:16.545 [2024-07-13 22:20:35.744623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.545 [2024-07-13 22:20:35.744658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.545 qpair failed and we were unable to recover it. 00:37:16.545 [2024-07-13 22:20:35.744838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.545 [2024-07-13 22:20:35.744881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.545 qpair failed and we were unable to recover it. 00:37:16.545 [2024-07-13 22:20:35.745083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.545 [2024-07-13 22:20:35.745120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.545 qpair failed and we were unable to recover it. 00:37:16.545 [2024-07-13 22:20:35.745301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.545 [2024-07-13 22:20:35.745335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.545 qpair failed and we were unable to recover it. 00:37:16.545 [2024-07-13 22:20:35.745536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.545 [2024-07-13 22:20:35.745568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.545 qpair failed and we were unable to recover it. 00:37:16.545 [2024-07-13 22:20:35.745752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.545 [2024-07-13 22:20:35.745786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.545 qpair failed and we were unable to recover it. 
00:37:16.545 [2024-07-13 22:20:35.746002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.545 [2024-07-13 22:20:35.746039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.545 qpair failed and we were unable to recover it. 00:37:16.545 [2024-07-13 22:20:35.746250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.545 [2024-07-13 22:20:35.746283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.545 qpair failed and we were unable to recover it. 00:37:16.545 [2024-07-13 22:20:35.746676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.545 [2024-07-13 22:20:35.746743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.545 qpair failed and we were unable to recover it. 00:37:16.545 [2024-07-13 22:20:35.746969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.545 [2024-07-13 22:20:35.747006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.545 qpair failed and we were unable to recover it. 00:37:16.545 [2024-07-13 22:20:35.747211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.545 [2024-07-13 22:20:35.747247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.545 qpair failed and we were unable to recover it. 00:37:16.545 [2024-07-13 22:20:35.747460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.545 [2024-07-13 22:20:35.747493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.545 qpair failed and we were unable to recover it. 00:37:16.545 [2024-07-13 22:20:35.747676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.747712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.747952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.747989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.748191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.748223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.748384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.748417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 
00:37:16.546 [2024-07-13 22:20:35.748684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.748739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.748956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.748993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.749183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.749219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.749427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.749459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.749640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.749676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.749893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.749932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.750148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.750185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.750392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.750425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.750769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.750824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.751081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.751114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 
00:37:16.546 [2024-07-13 22:20:35.751338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.751371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.751561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.751594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.751775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.751813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.752033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.752066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.752239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.752276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.752462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.752495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.752670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.752708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.752943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.752981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.753188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.753230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.753442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.753476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 
00:37:16.546 [2024-07-13 22:20:35.753666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.753699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.753924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.753961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.754169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.754202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.754387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.754420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.754738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.754800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.755012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.755062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.755249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.755286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.546 [2024-07-13 22:20:35.755490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.546 [2024-07-13 22:20:35.755523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.546 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.755703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.755739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.755938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.755974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 
00:37:16.547 [2024-07-13 22:20:35.756149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.756186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.756398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.756431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.756588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.756621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.756806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.756839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.757099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.757136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.757335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.757369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.757647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.757708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.757976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.758013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.758225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.758258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.758443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.758475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 
00:37:16.547 [2024-07-13 22:20:35.758710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.758746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.758962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.758996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.759207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.759245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.759474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.759507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.759742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.759778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.759994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.760032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.760260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.760293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.760483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.760515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.760700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.760736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.760937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.760974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 
00:37:16.547 [2024-07-13 22:20:35.761175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.761212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.761402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.761435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.761622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.761654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.761884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.761922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.762166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.762198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.762385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.762418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.762725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.762786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.763001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.763034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.763240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.763282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.547 qpair failed and we were unable to recover it. 00:37:16.547 [2024-07-13 22:20:35.763467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.547 [2024-07-13 22:20:35.763500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 
00:37:16.548 [2024-07-13 22:20:35.763747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.763780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.763940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.763984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.764279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.764315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.764541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.764573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.764837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.764877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.765125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.765161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.765347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.765383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.765573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.765606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.765890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.765927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.766158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.766194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 
00:37:16.548 [2024-07-13 22:20:35.766435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.766468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.766737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.766770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.766994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.767031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.767245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.767278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.767439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.767492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.767706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.767739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.768025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.768084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.768267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.768300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.768504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.768541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.768779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.768811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 
00:37:16.548 [2024-07-13 22:20:35.769064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.769101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.769281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.769314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.769500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.769533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.769697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.769729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.770033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.770101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.770342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.770379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.770548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.770584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.770860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.770901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.771160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.771197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 00:37:16.548 [2024-07-13 22:20:35.771428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.548 [2024-07-13 22:20:35.771475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.548 qpair failed and we were unable to recover it. 
00:37:16.548 [2024-07-13 22:20:35.771684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.771721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.771909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.771942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.772187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.772248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.772459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.772495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.772704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.772740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.772980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.773013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.773222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.773255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.773484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.773520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.773731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.773772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.773962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.773995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 
00:37:16.549 [2024-07-13 22:20:35.774173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.774209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.774441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.774477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.774695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.774728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.774921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.774955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.775325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.775400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.775578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.775615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.775814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.775851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.776069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.776102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.776313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.776351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.776574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.776609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 
00:37:16.549 [2024-07-13 22:20:35.776772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.776805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.776993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.777026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.777278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.777315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.777551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.777587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.777819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.777852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.778051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.778083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.778372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.778430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.778634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.778670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.778882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.778915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.549 [2024-07-13 22:20:35.779096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.779129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 
00:37:16.549 [2024-07-13 22:20:35.779391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.549 [2024-07-13 22:20:35.779463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.549 qpair failed and we were unable to recover it. 00:37:16.550 [2024-07-13 22:20:35.779700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.550 [2024-07-13 22:20:35.779736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.550 qpair failed and we were unable to recover it. 00:37:16.550 [2024-07-13 22:20:35.779940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.550 [2024-07-13 22:20:35.779977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.550 qpair failed and we were unable to recover it. 00:37:16.550 [2024-07-13 22:20:35.780212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.550 [2024-07-13 22:20:35.780245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.550 qpair failed and we were unable to recover it. 00:37:16.550 [2024-07-13 22:20:35.780615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.550 [2024-07-13 22:20:35.780685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.550 qpair failed and we were unable to recover it. 00:37:16.550 [2024-07-13 22:20:35.780916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.550 [2024-07-13 22:20:35.780953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.550 qpair failed and we were unable to recover it. 00:37:16.550 [2024-07-13 22:20:35.781142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.550 [2024-07-13 22:20:35.781179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.550 qpair failed and we were unable to recover it. 00:37:16.550 [2024-07-13 22:20:35.781411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.550 [2024-07-13 22:20:35.781444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.550 qpair failed and we were unable to recover it. 00:37:16.550 [2024-07-13 22:20:35.781672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.550 [2024-07-13 22:20:35.781729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.550 qpair failed and we were unable to recover it. 00:37:16.550 [2024-07-13 22:20:35.781946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.550 [2024-07-13 22:20:35.781978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.550 qpair failed and we were unable to recover it. 
00:37:16.550 [2024-07-13 22:20:35.782159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.550 [2024-07-13 22:20:35.782197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.550 qpair failed and we were unable to recover it.
00:37:16.550 [... the same three-line error group repeats roughly 210 times for tqpair=0x6150001ffe80 (addr=10.0.0.2, port=4420) between 22:20:35.782159 and 22:20:35.834653: every connect() attempt fails with errno = 111 and the qpair cannot be recovered ...]
00:37:16.557 [2024-07-13 22:20:35.834620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.557 [2024-07-13 22:20:35.834653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.557 qpair failed and we were unable to recover it.
00:37:16.557 [2024-07-13 22:20:35.834834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.557 [2024-07-13 22:20:35.834877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.557 qpair failed and we were unable to recover it. 00:37:16.557 [2024-07-13 22:20:35.835063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.557 [2024-07-13 22:20:35.835100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.557 qpair failed and we were unable to recover it. 00:37:16.557 [2024-07-13 22:20:35.835331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.557 [2024-07-13 22:20:35.835368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.557 qpair failed and we were unable to recover it. 00:37:16.557 [2024-07-13 22:20:35.835575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.557 [2024-07-13 22:20:35.835607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.557 qpair failed and we were unable to recover it. 00:37:16.557 [2024-07-13 22:20:35.835795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.557 [2024-07-13 22:20:35.835832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.557 qpair failed and we were unable to recover it. 00:37:16.557 [2024-07-13 22:20:35.836076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.557 [2024-07-13 22:20:35.836119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.557 qpair failed and we were unable to recover it. 00:37:16.557 [2024-07-13 22:20:35.836364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.557 [2024-07-13 22:20:35.836397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.557 qpair failed and we were unable to recover it. 00:37:16.557 [2024-07-13 22:20:35.836554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.557 [2024-07-13 22:20:35.836587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.557 qpair failed and we were unable to recover it. 00:37:16.557 [2024-07-13 22:20:35.836815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.557 [2024-07-13 22:20:35.836852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.557 qpair failed and we were unable to recover it. 00:37:16.557 [2024-07-13 22:20:35.837101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.557 [2024-07-13 22:20:35.837133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.557 qpair failed and we were unable to recover it. 
00:37:16.557 [2024-07-13 22:20:35.837345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.557 [2024-07-13 22:20:35.837382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.557 qpair failed and we were unable to recover it. 00:37:16.557 [2024-07-13 22:20:35.837581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.557 [2024-07-13 22:20:35.837614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.557 qpair failed and we were unable to recover it. 00:37:16.557 [2024-07-13 22:20:35.837804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.557 [2024-07-13 22:20:35.837836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.557 qpair failed and we were unable to recover it. 00:37:16.557 [2024-07-13 22:20:35.838032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.557 [2024-07-13 22:20:35.838066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.557 qpair failed and we were unable to recover it. 00:37:16.557 [2024-07-13 22:20:35.838273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.557 [2024-07-13 22:20:35.838309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.557 qpair failed and we were unable to recover it. 00:37:16.557 [2024-07-13 22:20:35.838552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.557 [2024-07-13 22:20:35.838585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.557 qpair failed and we were unable to recover it. 00:37:16.557 [2024-07-13 22:20:35.838773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.557 [2024-07-13 22:20:35.838810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.557 qpair failed and we were unable to recover it. 00:37:16.557 [2024-07-13 22:20:35.839033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.557 [2024-07-13 22:20:35.839066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.557 qpair failed and we were unable to recover it. 00:37:16.557 [2024-07-13 22:20:35.839256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.557 [2024-07-13 22:20:35.839289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.557 qpair failed and we were unable to recover it. 00:37:16.557 [2024-07-13 22:20:35.839469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.557 [2024-07-13 22:20:35.839502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.557 qpair failed and we were unable to recover it. 
00:37:16.557 [2024-07-13 22:20:35.839687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.557 [2024-07-13 22:20:35.839723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.557 qpair failed and we were unable to recover it. 00:37:16.557 [2024-07-13 22:20:35.839900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.557 [2024-07-13 22:20:35.839937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.557 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.840145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.840181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.840392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.840424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.840602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.840634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.840851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.840905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.841092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.841128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.841367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.841399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.841586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.841618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.841805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.841838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 
00:37:16.558 [2024-07-13 22:20:35.842039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.842090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.842274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.842307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.842518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.842554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.842791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.842823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.843013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.843046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.843210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.843243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.843487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.843541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.843790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.843823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.843988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.844021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.844246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.844279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 
00:37:16.558 [2024-07-13 22:20:35.844466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.844498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.844664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.844715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.844903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.844937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.845126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.845159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.845316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.845349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.845534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.845567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.845755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.845788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.845999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.846032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.846213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.846249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.846452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.846487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 
00:37:16.558 [2024-07-13 22:20:35.846660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.846696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.846891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.558 [2024-07-13 22:20:35.846924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.558 qpair failed and we were unable to recover it. 00:37:16.558 [2024-07-13 22:20:35.847123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.847156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.847320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.847353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.847542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.847574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.847764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.847797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.847960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.847993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.848181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.848213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.848421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.848457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.848660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.848694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 
00:37:16.559 [2024-07-13 22:20:35.848882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.848915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.849068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.849102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.849264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.849297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.849463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.849495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.849644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.849677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.849855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.849901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.850113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.850146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.850341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.850374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.850546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.850580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.850738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.850781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 
00:37:16.559 [2024-07-13 22:20:35.850984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.851018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.851204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.851236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.851410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.851446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.851678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.851715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.851909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.851946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.852157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.852190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.852360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.852392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.852580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.852630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.852834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.852879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.853068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.853100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 
00:37:16.559 [2024-07-13 22:20:35.853289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.853323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.853544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.853578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.853792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.853825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.854014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.559 [2024-07-13 22:20:35.854047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.559 qpair failed and we were unable to recover it. 00:37:16.559 [2024-07-13 22:20:35.854209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.854242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.854401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.854434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.854594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.854645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.854843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.854884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.855048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.855081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.855259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.855296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 
00:37:16.560 [2024-07-13 22:20:35.855511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.855543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.855706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.855738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.855928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.855967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.856171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.856208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.856379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.856415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.856608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.856641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.856819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.856852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.857087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.857124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.857296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.857333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.857517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.857549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 
00:37:16.560 [2024-07-13 22:20:35.857762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.857794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.857985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.858018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.858219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.858252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.858449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.858482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.858695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.858727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.858892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.858930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.859091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.859123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.859288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.859320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.859529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.859571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.859750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.859787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 
00:37:16.560 [2024-07-13 22:20:35.859998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.860032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.860185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.860218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.860383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.860415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.860627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.860663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.860881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.860917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.861116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.861148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.861352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.861388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.861625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.560 [2024-07-13 22:20:35.861662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.560 qpair failed and we were unable to recover it. 00:37:16.560 [2024-07-13 22:20:35.861874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.561 [2024-07-13 22:20:35.861911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.561 qpair failed and we were unable to recover it. 00:37:16.561 [2024-07-13 22:20:35.862123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.561 [2024-07-13 22:20:35.862156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.561 qpair failed and we were unable to recover it. 
00:37:16.561 [2024-07-13 22:20:35.862318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.561 [2024-07-13 22:20:35.862351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.561 qpair failed and we were unable to recover it. 00:37:16.561 [2024-07-13 22:20:35.862534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.561 [2024-07-13 22:20:35.862566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.561 qpair failed and we were unable to recover it. 00:37:16.561 [2024-07-13 22:20:35.862795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.561 [2024-07-13 22:20:35.862829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.561 qpair failed and we were unable to recover it. 00:37:16.561 [2024-07-13 22:20:35.863002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.561 [2024-07-13 22:20:35.863035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.561 qpair failed and we were unable to recover it. 00:37:16.561 [2024-07-13 22:20:35.863199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.561 [2024-07-13 22:20:35.863249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.561 qpair failed and we were unable to recover it. 00:37:16.561 [2024-07-13 22:20:35.863455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.561 [2024-07-13 22:20:35.863492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.561 qpair failed and we were unable to recover it. 00:37:16.561 [2024-07-13 22:20:35.863702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.561 [2024-07-13 22:20:35.863739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.561 qpair failed and we were unable to recover it. 00:37:16.561 [2024-07-13 22:20:35.863982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.561 [2024-07-13 22:20:35.864015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.561 qpair failed and we were unable to recover it. 00:37:16.561 [2024-07-13 22:20:35.864242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.561 [2024-07-13 22:20:35.864279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.561 qpair failed and we were unable to recover it. 00:37:16.561 [2024-07-13 22:20:35.864472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.561 [2024-07-13 22:20:35.864506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.561 qpair failed and we were unable to recover it. 
00:37:16.561 [2024-07-13 22:20:35.864698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.561 [2024-07-13 22:20:35.864730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.561 qpair failed and we were unable to recover it.
[... the same three-line failure repeated ~208 more times with advancing timestamps, from 2024-07-13 22:20:35.864 to 22:20:35.916 (log time 00:37:16.561 - 00:37:16.845); every attempt targeted addr=10.0.0.2, port=4420 and ended with "qpair failed and we were unable to recover it."; tqpair=0x6150001ffe80 through 22:20:35.890086, tqpair=0x6150001f2780 from 22:20:35.890349 onward ...]
00:37:16.845 [2024-07-13 22:20:35.916248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.845 [2024-07-13 22:20:35.916280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.845 qpair failed and we were unable to recover it.
00:37:16.845 [2024-07-13 22:20:35.916497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.845 [2024-07-13 22:20:35.916533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.845 qpair failed and we were unable to recover it. 00:37:16.845 [2024-07-13 22:20:35.916735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.845 [2024-07-13 22:20:35.916772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.845 qpair failed and we were unable to recover it. 00:37:16.845 [2024-07-13 22:20:35.916991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.845 [2024-07-13 22:20:35.917024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.845 qpair failed and we were unable to recover it. 00:37:16.845 [2024-07-13 22:20:35.917227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.845 [2024-07-13 22:20:35.917260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.845 qpair failed and we were unable to recover it. 00:37:16.845 [2024-07-13 22:20:35.917550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.845 [2024-07-13 22:20:35.917585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.845 qpair failed and we were unable to recover it. 00:37:16.845 [2024-07-13 22:20:35.917791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.845 [2024-07-13 22:20:35.917831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.845 qpair failed and we were unable to recover it. 00:37:16.845 [2024-07-13 22:20:35.918060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.845 [2024-07-13 22:20:35.918103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.845 qpair failed and we were unable to recover it. 00:37:16.845 [2024-07-13 22:20:35.918320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.845 [2024-07-13 22:20:35.918353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.845 qpair failed and we were unable to recover it. 00:37:16.845 [2024-07-13 22:20:35.918539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.845 [2024-07-13 22:20:35.918572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.845 qpair failed and we were unable to recover it. 00:37:16.845 [2024-07-13 22:20:35.918752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.845 [2024-07-13 22:20:35.918785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.845 qpair failed and we were unable to recover it. 
00:37:16.845 [2024-07-13 22:20:35.918990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.845 [2024-07-13 22:20:35.919027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.845 qpair failed and we were unable to recover it. 00:37:16.845 [2024-07-13 22:20:35.919210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.845 [2024-07-13 22:20:35.919243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.845 qpair failed and we were unable to recover it. 00:37:16.845 [2024-07-13 22:20:35.919511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.845 [2024-07-13 22:20:35.919570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.845 qpair failed and we were unable to recover it. 00:37:16.845 [2024-07-13 22:20:35.919811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.845 [2024-07-13 22:20:35.919844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.845 qpair failed and we were unable to recover it. 00:37:16.845 [2024-07-13 22:20:35.920062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.845 [2024-07-13 22:20:35.920120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.845 qpair failed and we were unable to recover it. 00:37:16.845 [2024-07-13 22:20:35.920314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.845 [2024-07-13 22:20:35.920348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.845 qpair failed and we were unable to recover it. 00:37:16.845 [2024-07-13 22:20:35.920515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.845 [2024-07-13 22:20:35.920547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.845 qpair failed and we were unable to recover it. 00:37:16.845 [2024-07-13 22:20:35.920704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.845 [2024-07-13 22:20:35.920735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.845 qpair failed and we were unable to recover it. 00:37:16.845 [2024-07-13 22:20:35.920948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.845 [2024-07-13 22:20:35.920998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.845 qpair failed and we were unable to recover it. 00:37:16.845 [2024-07-13 22:20:35.921177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.845 [2024-07-13 22:20:35.921209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.845 qpair failed and we were unable to recover it. 
00:37:16.845 [2024-07-13 22:20:35.921428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.845 [2024-07-13 22:20:35.921464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.845 qpair failed and we were unable to recover it. 00:37:16.845 [2024-07-13 22:20:35.921671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.845 [2024-07-13 22:20:35.921707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.921913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.921949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.922160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.922192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.922457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.922514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.922749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.922782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.922945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.922979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.923140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.923172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.923441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.923498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.923700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.923735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 
00:37:16.846 [2024-07-13 22:20:35.923945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.923978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.924160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.924192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.924393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.924426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.924583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.924616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.924852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.924891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.925111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.925144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.925343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.925376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.925530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.925562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.925767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.925803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.926007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.926039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 
00:37:16.846 [2024-07-13 22:20:35.926387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.926446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.926671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.926707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.926895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.926932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.927117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.927149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.927384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.927420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.927665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.927701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.928010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.928047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.928260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.928292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.928460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.928493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.928727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.928763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 
00:37:16.846 [2024-07-13 22:20:35.928947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.928983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.929202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.929234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.929438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.929472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.929645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.929681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.929915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.929951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.930159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.930192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.930454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.930512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.930740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.930776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.930993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.931030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.931246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.931279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 
00:37:16.846 [2024-07-13 22:20:35.931579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.931635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.931841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.846 [2024-07-13 22:20:35.931884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.846 qpair failed and we were unable to recover it. 00:37:16.846 [2024-07-13 22:20:35.932128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.932161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.932373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.932405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.932649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.932682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.932848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.932891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.933124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.933160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.933383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.933417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.933696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.933754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.933962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.933995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 
00:37:16.847 [2024-07-13 22:20:35.934165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.934213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.934396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.934428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.934768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.934809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.935061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.935094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.935262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.935294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.935512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.935544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.935788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.935825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.936048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.936080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.936301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.936337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.936573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.936605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 
00:37:16.847 [2024-07-13 22:20:35.936819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.936855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.937082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.937114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.937325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.937361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.937565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.937597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.937767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.937800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.937987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.938036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.938247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.938283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.938497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.938529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.938717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.938753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.938958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.938993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 
00:37:16.847 [2024-07-13 22:20:35.939204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.939235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.939394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.939425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.939659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.939696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.939905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.939941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.940133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.847 [2024-07-13 22:20:35.940166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.847 qpair failed and we were unable to recover it. 00:37:16.847 [2024-07-13 22:20:35.940374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.940406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.940622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.940658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.940891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.940924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.941111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.941161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.941372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.941404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 
00:37:16.848 [2024-07-13 22:20:35.941676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.941733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.941978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.942011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.942231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.942281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.942465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.942497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.942743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.942779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.942989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.943025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.943208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.943244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.943431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.943463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.943655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.943687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.943888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.943925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 
00:37:16.848 [2024-07-13 22:20:35.944139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.944171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.944360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.944393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.944668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.944729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.944962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.944995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.945176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.945212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.945421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.945452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.945670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.945721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.945932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.945967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.946208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.946248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.946413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.946445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 
00:37:16.848 [2024-07-13 22:20:35.946634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.946669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.946845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.946991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.947204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.947240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.947459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.947491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.947708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.947741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.947953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.947986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.948150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.948182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.948363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.948395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.948557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.948589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 00:37:16.848 [2024-07-13 22:20:35.948777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.848 [2024-07-13 22:20:35.948809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.848 qpair failed and we were unable to recover it. 
00:37:16.848 [2024-07-13 22:20:35.949042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.848 [2024-07-13 22:20:35.949078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.848 qpair failed and we were unable to recover it.
[... the same three-line failure pattern -- connect() failed, errno = 111 in posix_sock_create, the resulting sock connection error in nvme_tcp_qpair_connect_sock, then "qpair failed and we were unable to recover it." -- repeats continuously for tqpair=0x6150001f2780 against 10.0.0.2, port 4420 from 22:20:35.949 through 22:20:35.993 ...]
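errno = 111 on Linux is ECONNREFUSED: the target at 10.0.0.2 is reachable, but nothing is accepting TCP connections on port 4420 (the IANA-assigned NVMe/TCP port) -- typically the peer answers the SYN with a RST -- so every connect() issued from posix_sock_create fails immediately and nvme_tcp_qpair_connect_sock gives up on the qpair. A minimal standalone sketch of the same failure mode, assuming (as the log implies) that no listener is up on that address/port; this is illustrative C, not SPDK code, and the file name is hypothetical:

/* econnrefused_demo.c - reproduce the connect() failure seen in the log.
 * Build: cc -o econnrefused_demo econnrefused_demo.c
 * Assumes nothing is listening on 10.0.0.2:4420, as in the log. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in sa;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    if (inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr) != 1) {
        fprintf(stderr, "bad address\n");
        close(fd);
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With a reachable host actively refusing the port, this prints
         * errno = 111 (Connection refused), matching the log's
         * "connect() failed, errno = 111" from posix_sock_create. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Against a host that actively refuses the port this prints errno = 111; against a silently filtered port the same call would instead time out with errno = 110 (ETIMEDOUT), so the 111 here points at a target that is up on the network but has no NVMe/TCP listener.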
00:37:16.853 [2024-07-13 22:20:35.994133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.853 [2024-07-13 22:20:35.994169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.853 qpair failed and we were unable to recover it.
[... the final retries on tqpair=0x6150001f2780 fail the same way through 22:20:35.995 ...]
00:37:16.853 [2024-07-13 22:20:35.996079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.853 [2024-07-13 22:20:35.996127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.853 qpair failed and we were unable to recover it.
[... the same three-line failure pattern repeats for the newly allocated tqpair=0x6150001ffe80 against 10.0.0.2, port 4420 from 22:20:35.996 through 22:20:36.001 ...]
00:37:16.854 [2024-07-13 22:20:36.001748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.854 [2024-07-13 22:20:36.001784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.854 qpair failed and we were unable to recover it. 00:37:16.854 [2024-07-13 22:20:36.002003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.854 [2024-07-13 22:20:36.002037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.854 qpair failed and we were unable to recover it. 00:37:16.854 [2024-07-13 22:20:36.002230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.854 [2024-07-13 22:20:36.002264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.854 qpair failed and we were unable to recover it. 00:37:16.854 [2024-07-13 22:20:36.002476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.854 [2024-07-13 22:20:36.002509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.854 qpair failed and we were unable to recover it. 00:37:16.854 [2024-07-13 22:20:36.002684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.854 [2024-07-13 22:20:36.002720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.854 qpair failed and we were unable to recover it. 00:37:16.854 [2024-07-13 22:20:36.002949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.854 [2024-07-13 22:20:36.002982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.854 qpair failed and we were unable to recover it. 00:37:16.854 [2024-07-13 22:20:36.003186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.854 [2024-07-13 22:20:36.003220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.854 qpair failed and we were unable to recover it. 00:37:16.854 [2024-07-13 22:20:36.003432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.854 [2024-07-13 22:20:36.003465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.854 qpair failed and we were unable to recover it. 00:37:16.854 [2024-07-13 22:20:36.003651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.854 [2024-07-13 22:20:36.003687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.854 qpair failed and we were unable to recover it. 00:37:16.854 [2024-07-13 22:20:36.003890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.854 [2024-07-13 22:20:36.003927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.854 qpair failed and we were unable to recover it. 
00:37:16.854 [2024-07-13 22:20:36.004124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.854 [2024-07-13 22:20:36.004161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.854 qpair failed and we were unable to recover it. 00:37:16.854 [2024-07-13 22:20:36.004394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.854 [2024-07-13 22:20:36.004427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.854 qpair failed and we were unable to recover it. 00:37:16.854 [2024-07-13 22:20:36.004592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.854 [2024-07-13 22:20:36.004631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.854 qpair failed and we were unable to recover it. 00:37:16.854 [2024-07-13 22:20:36.004839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.854 [2024-07-13 22:20:36.004882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.854 qpair failed and we were unable to recover it. 00:37:16.854 [2024-07-13 22:20:36.005092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.854 [2024-07-13 22:20:36.005125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.854 qpair failed and we were unable to recover it. 00:37:16.854 [2024-07-13 22:20:36.005310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.854 [2024-07-13 22:20:36.005343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.854 qpair failed and we were unable to recover it. 00:37:16.854 [2024-07-13 22:20:36.005653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.854 [2024-07-13 22:20:36.005717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.854 qpair failed and we were unable to recover it. 00:37:16.854 [2024-07-13 22:20:36.005975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.854 [2024-07-13 22:20:36.006010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.854 qpair failed and we were unable to recover it. 00:37:16.854 [2024-07-13 22:20:36.006200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.854 [2024-07-13 22:20:36.006234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.854 qpair failed and we were unable to recover it. 00:37:16.854 [2024-07-13 22:20:36.006448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.854 [2024-07-13 22:20:36.006482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.854 qpair failed and we were unable to recover it. 
00:37:16.854 [2024-07-13 22:20:36.006648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.854 [2024-07-13 22:20:36.006681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.854 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.006931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.006966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.007121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.007154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.007406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.007439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.007744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.007806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.007993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.008027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.008223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.008257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.008445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.008478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.008823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.008890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.009101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.009134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 
00:37:16.855 [2024-07-13 22:20:36.009325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.009362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.009550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.009584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.009799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.009832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.010027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.010061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.010275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.010311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.010515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.010549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.010723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.010760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.010967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.011002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.011243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.011280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.011490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.011523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 
00:37:16.855 [2024-07-13 22:20:36.011754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.011792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.011979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.012014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.012222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.012260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.012495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.012528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.012699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.012736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.012928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.012963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.013126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.013159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.013345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.013378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.013607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.013666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.013907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.013940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 
00:37:16.855 [2024-07-13 22:20:36.014125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.014158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.014409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.014442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.014652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.014699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.014907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.014959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.015113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.015146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.015359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.015392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.015606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.015642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.015879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.015929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.016092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.016125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.016354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.016388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 
00:37:16.855 [2024-07-13 22:20:36.016701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.016762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.855 [2024-07-13 22:20:36.016993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.855 [2024-07-13 22:20:36.017042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.855 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.017254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.017287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.017450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.017483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.017694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.017730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.017975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.018013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.018188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.018225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.018432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.018465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.018671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.018707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.018895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.018934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 
00:37:16.856 [2024-07-13 22:20:36.019113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.019150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.019360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.019393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.019576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.019612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.019790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.019827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.020041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.020077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.020287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.020319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.020505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.020538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.020754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.020790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.020970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.021008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.021225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.021258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 
00:37:16.856 [2024-07-13 22:20:36.021441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.021474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.021641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.021675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.021876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.021914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.022124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.022157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.022477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.022548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.022789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.022823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.023024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.023076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.023314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.023347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.023561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.023593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.023803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.023836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 
00:37:16.856 [2024-07-13 22:20:36.023999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.024033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.024193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.024227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.024545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.024618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.024850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.024901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.025124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.025158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.025369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.025402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.856 [2024-07-13 22:20:36.025561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.856 [2024-07-13 22:20:36.025593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.856 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.025755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.025788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.025991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.026028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.026202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.026240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 
00:37:16.857 [2024-07-13 22:20:36.026531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.026589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.026791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.026827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.027044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.027081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.027286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.027321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.027563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.027623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.027823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.027859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.028087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.028120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.028301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.028334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.028543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.028580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.028776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.028813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 
00:37:16.857 [2024-07-13 22:20:36.029024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.029061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.029280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.029312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.029496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.029533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.029764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.029801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.030033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.030071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.030289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.030322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.030499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.030532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.030706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.030743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.030968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.031006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.031223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.031256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 
00:37:16.857 [2024-07-13 22:20:36.031476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.031513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.031714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.031750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.031957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.031994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.032229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.032263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.032516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.032549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.032787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.032830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.033050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.033084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.033292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.033325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.033507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.033544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 00:37:16.857 [2024-07-13 22:20:36.033778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.857 [2024-07-13 22:20:36.033811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.857 qpair failed and we were unable to recover it. 
00:37:16.857 [2024-07-13 22:20:36.034015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.857 [2024-07-13 22:20:36.034064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.857 qpair failed and we were unable to recover it.
00:37:16.865 [2024-07-13 22:20:36.086639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.866 [2024-07-13 22:20:36.086676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.866 qpair failed and we were unable to recover it. 00:37:16.866 [2024-07-13 22:20:36.086877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.866 [2024-07-13 22:20:36.086913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.866 qpair failed and we were unable to recover it. 00:37:16.866 [2024-07-13 22:20:36.087140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.866 [2024-07-13 22:20:36.087176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.866 qpair failed and we were unable to recover it. 00:37:16.866 [2024-07-13 22:20:36.087392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.866 [2024-07-13 22:20:36.087424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.866 qpair failed and we were unable to recover it. 00:37:16.866 [2024-07-13 22:20:36.087612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.866 [2024-07-13 22:20:36.087645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.866 qpair failed and we were unable to recover it. 00:37:16.866 [2024-07-13 22:20:36.087862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.866 [2024-07-13 22:20:36.087900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.866 qpair failed and we were unable to recover it. 00:37:16.866 [2024-07-13 22:20:36.088114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.866 [2024-07-13 22:20:36.088150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.866 qpair failed and we were unable to recover it. 00:37:16.866 [2024-07-13 22:20:36.088362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.866 [2024-07-13 22:20:36.088395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.866 qpair failed and we were unable to recover it. 00:37:16.866 [2024-07-13 22:20:36.088607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.866 [2024-07-13 22:20:36.088657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.866 qpair failed and we were unable to recover it. 00:37:16.866 [2024-07-13 22:20:36.088851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.866 [2024-07-13 22:20:36.088896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.866 qpair failed and we were unable to recover it. 
00:37:16.866 [2024-07-13 22:20:36.089127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.866 [2024-07-13 22:20:36.089178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.866 qpair failed and we were unable to recover it. 00:37:16.866 [2024-07-13 22:20:36.089397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.866 [2024-07-13 22:20:36.089431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.866 qpair failed and we were unable to recover it. 00:37:16.866 [2024-07-13 22:20:36.089696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.866 [2024-07-13 22:20:36.089754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.866 qpair failed and we were unable to recover it. 00:37:16.866 [2024-07-13 22:20:36.089995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.866 [2024-07-13 22:20:36.090032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.866 qpair failed and we were unable to recover it. 00:37:16.867 [2024-07-13 22:20:36.090229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.867 [2024-07-13 22:20:36.090266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.867 qpair failed and we were unable to recover it. 00:37:16.867 [2024-07-13 22:20:36.090457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.867 [2024-07-13 22:20:36.090490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.867 qpair failed and we were unable to recover it. 00:37:16.867 [2024-07-13 22:20:36.090832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.867 [2024-07-13 22:20:36.090897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.867 qpair failed and we were unable to recover it. 00:37:16.867 [2024-07-13 22:20:36.091083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.867 [2024-07-13 22:20:36.091120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.867 qpair failed and we were unable to recover it. 00:37:16.867 [2024-07-13 22:20:36.091325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.867 [2024-07-13 22:20:36.091363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.867 qpair failed and we were unable to recover it. 00:37:16.867 [2024-07-13 22:20:36.091584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.867 [2024-07-13 22:20:36.091622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.867 qpair failed and we were unable to recover it. 
00:37:16.867 [2024-07-13 22:20:36.091811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.867 [2024-07-13 22:20:36.091844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.867 qpair failed and we were unable to recover it. 00:37:16.867 [2024-07-13 22:20:36.092057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.867 [2024-07-13 22:20:36.092094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.867 qpair failed and we were unable to recover it. 00:37:16.867 [2024-07-13 22:20:36.092334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.867 [2024-07-13 22:20:36.092367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.867 qpair failed and we were unable to recover it. 00:37:16.867 [2024-07-13 22:20:36.092557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.867 [2024-07-13 22:20:36.092590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.867 qpair failed and we were unable to recover it. 00:37:16.867 [2024-07-13 22:20:36.092830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.867 [2024-07-13 22:20:36.092872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.867 qpair failed and we were unable to recover it. 00:37:16.867 [2024-07-13 22:20:36.093109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.867 [2024-07-13 22:20:36.093142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.867 qpair failed and we were unable to recover it. 00:37:16.867 [2024-07-13 22:20:36.093328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.867 [2024-07-13 22:20:36.093365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.867 qpair failed and we were unable to recover it. 00:37:16.867 [2024-07-13 22:20:36.093571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.867 [2024-07-13 22:20:36.093603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.867 qpair failed and we were unable to recover it. 00:37:16.867 [2024-07-13 22:20:36.093784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.867 [2024-07-13 22:20:36.093821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.867 qpair failed and we were unable to recover it. 00:37:16.867 [2024-07-13 22:20:36.094025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.867 [2024-07-13 22:20:36.094059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.867 qpair failed and we were unable to recover it. 
00:37:16.867 [2024-07-13 22:20:36.094250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.867 [2024-07-13 22:20:36.094287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.867 qpair failed and we were unable to recover it. 00:37:16.867 [2024-07-13 22:20:36.094474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.867 [2024-07-13 22:20:36.094507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.867 qpair failed and we were unable to recover it. 00:37:16.867 [2024-07-13 22:20:36.094688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.867 [2024-07-13 22:20:36.094720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.868 qpair failed and we were unable to recover it. 00:37:16.868 [2024-07-13 22:20:36.094938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.868 [2024-07-13 22:20:36.094976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.868 qpair failed and we were unable to recover it. 00:37:16.868 [2024-07-13 22:20:36.095157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.868 [2024-07-13 22:20:36.095194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.868 qpair failed and we were unable to recover it. 00:37:16.868 [2024-07-13 22:20:36.095395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.868 [2024-07-13 22:20:36.095428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.868 qpair failed and we were unable to recover it. 00:37:16.868 [2024-07-13 22:20:36.095611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.868 [2024-07-13 22:20:36.095644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.868 qpair failed and we were unable to recover it. 00:37:16.868 [2024-07-13 22:20:36.095810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.868 [2024-07-13 22:20:36.095843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.868 qpair failed and we were unable to recover it. 00:37:16.868 [2024-07-13 22:20:36.096043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.868 [2024-07-13 22:20:36.096075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.868 qpair failed and we were unable to recover it. 00:37:16.868 [2024-07-13 22:20:36.096286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.868 [2024-07-13 22:20:36.096319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.868 qpair failed and we were unable to recover it. 
00:37:16.868 [2024-07-13 22:20:36.096513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.868 [2024-07-13 22:20:36.096547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.868 qpair failed and we were unable to recover it. 00:37:16.868 [2024-07-13 22:20:36.096710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.868 [2024-07-13 22:20:36.096753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.868 qpair failed and we were unable to recover it. 00:37:16.868 [2024-07-13 22:20:36.096964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.868 [2024-07-13 22:20:36.096997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.868 qpair failed and we were unable to recover it. 00:37:16.868 [2024-07-13 22:20:36.097180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.868 [2024-07-13 22:20:36.097213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.868 qpair failed and we were unable to recover it. 00:37:16.868 [2024-07-13 22:20:36.097372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.868 [2024-07-13 22:20:36.097422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.868 qpair failed and we were unable to recover it. 00:37:16.868 [2024-07-13 22:20:36.097602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.868 [2024-07-13 22:20:36.097639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.869 qpair failed and we were unable to recover it. 00:37:16.869 [2024-07-13 22:20:36.097826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.869 [2024-07-13 22:20:36.097862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.869 qpair failed and we were unable to recover it. 00:37:16.869 [2024-07-13 22:20:36.098047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.869 [2024-07-13 22:20:36.098080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.869 qpair failed and we were unable to recover it. 00:37:16.869 [2024-07-13 22:20:36.098268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.869 [2024-07-13 22:20:36.098303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.869 qpair failed and we were unable to recover it. 00:37:16.869 [2024-07-13 22:20:36.098467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.869 [2024-07-13 22:20:36.098500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.869 qpair failed and we were unable to recover it. 
00:37:16.869 [2024-07-13 22:20:36.098658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.869 [2024-07-13 22:20:36.098691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.869 qpair failed and we were unable to recover it. 00:37:16.869 [2024-07-13 22:20:36.098894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.869 [2024-07-13 22:20:36.098944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.869 qpair failed and we were unable to recover it. 00:37:16.869 [2024-07-13 22:20:36.099105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.869 [2024-07-13 22:20:36.099138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.869 qpair failed and we were unable to recover it. 00:37:16.869 [2024-07-13 22:20:36.099297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.869 [2024-07-13 22:20:36.099330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.869 qpair failed and we were unable to recover it. 00:37:16.869 [2024-07-13 22:20:36.099540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.869 [2024-07-13 22:20:36.099573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.869 qpair failed and we were unable to recover it. 00:37:16.869 [2024-07-13 22:20:36.099756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.869 [2024-07-13 22:20:36.099789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.869 qpair failed and we were unable to recover it. 00:37:16.869 [2024-07-13 22:20:36.099982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.869 [2024-07-13 22:20:36.100019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.869 qpair failed and we were unable to recover it. 00:37:16.869 [2024-07-13 22:20:36.100200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.869 [2024-07-13 22:20:36.100237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.869 qpair failed and we were unable to recover it. 00:37:16.869 [2024-07-13 22:20:36.100446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.869 [2024-07-13 22:20:36.100483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.869 qpair failed and we were unable to recover it. 00:37:16.869 [2024-07-13 22:20:36.100667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.869 [2024-07-13 22:20:36.100704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.869 qpair failed and we were unable to recover it. 
00:37:16.869 [2024-07-13 22:20:36.100884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.869 [2024-07-13 22:20:36.100917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.870 qpair failed and we were unable to recover it. 00:37:16.870 [2024-07-13 22:20:36.101081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.870 [2024-07-13 22:20:36.101114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.870 qpair failed and we were unable to recover it. 00:37:16.870 [2024-07-13 22:20:36.101302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.870 [2024-07-13 22:20:36.101335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.870 qpair failed and we were unable to recover it. 00:37:16.870 [2024-07-13 22:20:36.101528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.870 [2024-07-13 22:20:36.101561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.870 qpair failed and we were unable to recover it. 00:37:16.870 [2024-07-13 22:20:36.101746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.870 [2024-07-13 22:20:36.101779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.870 qpair failed and we were unable to recover it. 00:37:16.870 [2024-07-13 22:20:36.101936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.870 [2024-07-13 22:20:36.101969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.870 qpair failed and we were unable to recover it. 00:37:16.870 [2024-07-13 22:20:36.102156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.870 [2024-07-13 22:20:36.102190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.870 qpair failed and we were unable to recover it. 00:37:16.870 [2024-07-13 22:20:36.102355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.870 [2024-07-13 22:20:36.102388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.870 qpair failed and we were unable to recover it. 00:37:16.870 [2024-07-13 22:20:36.102584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.870 [2024-07-13 22:20:36.102617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.870 qpair failed and we were unable to recover it. 00:37:16.870 [2024-07-13 22:20:36.102775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.870 [2024-07-13 22:20:36.102809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.870 qpair failed and we were unable to recover it. 
00:37:16.870 [2024-07-13 22:20:36.102980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.870 [2024-07-13 22:20:36.103013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.870 qpair failed and we were unable to recover it. 00:37:16.870 [2024-07-13 22:20:36.103173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.870 [2024-07-13 22:20:36.103205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.870 qpair failed and we were unable to recover it. 00:37:16.870 [2024-07-13 22:20:36.103412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.870 [2024-07-13 22:20:36.103445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.870 qpair failed and we were unable to recover it. 00:37:16.870 [2024-07-13 22:20:36.103608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.870 [2024-07-13 22:20:36.103641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.870 qpair failed and we were unable to recover it. 00:37:16.870 [2024-07-13 22:20:36.103824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.870 [2024-07-13 22:20:36.103857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.870 qpair failed and we were unable to recover it. 00:37:16.870 [2024-07-13 22:20:36.104018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.870 [2024-07-13 22:20:36.104051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.870 qpair failed and we were unable to recover it. 00:37:16.871 [2024-07-13 22:20:36.104239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.871 [2024-07-13 22:20:36.104277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.871 qpair failed and we were unable to recover it. 00:37:16.871 [2024-07-13 22:20:36.104449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.871 [2024-07-13 22:20:36.104486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.871 qpair failed and we were unable to recover it. 00:37:16.871 [2024-07-13 22:20:36.104692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.871 [2024-07-13 22:20:36.104728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.871 qpair failed and we were unable to recover it. 00:37:16.871 [2024-07-13 22:20:36.104908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.871 [2024-07-13 22:20:36.104942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.871 qpair failed and we were unable to recover it. 
00:37:16.871 [2024-07-13 22:20:36.105133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.871 [2024-07-13 22:20:36.105167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.871 qpair failed and we were unable to recover it. 00:37:16.871 [2024-07-13 22:20:36.105328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.871 [2024-07-13 22:20:36.105361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.871 qpair failed and we were unable to recover it. 00:37:16.871 [2024-07-13 22:20:36.105582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.871 [2024-07-13 22:20:36.105615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.871 qpair failed and we were unable to recover it. 00:37:16.871 [2024-07-13 22:20:36.105805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.871 [2024-07-13 22:20:36.105838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.871 qpair failed and we were unable to recover it. 00:37:16.871 [2024-07-13 22:20:36.106074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.871 [2024-07-13 22:20:36.106111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.871 qpair failed and we were unable to recover it. 00:37:16.871 [2024-07-13 22:20:36.106307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.871 [2024-07-13 22:20:36.106344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.871 qpair failed and we were unable to recover it. 00:37:16.871 [2024-07-13 22:20:36.106523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.871 [2024-07-13 22:20:36.106560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.871 qpair failed and we were unable to recover it. 00:37:16.871 [2024-07-13 22:20:36.106746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.871 [2024-07-13 22:20:36.106780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.871 qpair failed and we were unable to recover it. 00:37:16.871 [2024-07-13 22:20:36.106972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.871 [2024-07-13 22:20:36.107023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.871 qpair failed and we were unable to recover it. 00:37:16.871 [2024-07-13 22:20:36.107235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.872 [2024-07-13 22:20:36.107272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.872 qpair failed and we were unable to recover it. 
00:37:16.872 [2024-07-13 22:20:36.107488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.872 [2024-07-13 22:20:36.107521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.872 qpair failed and we were unable to recover it. 00:37:16.872 [2024-07-13 22:20:36.107681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.872 [2024-07-13 22:20:36.107714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.872 qpair failed and we were unable to recover it. 00:37:16.872 [2024-07-13 22:20:36.107883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.872 [2024-07-13 22:20:36.107917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.872 qpair failed and we were unable to recover it. 00:37:16.872 [2024-07-13 22:20:36.108109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.872 [2024-07-13 22:20:36.108142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.872 qpair failed and we were unable to recover it. 00:37:16.872 [2024-07-13 22:20:36.108332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.872 [2024-07-13 22:20:36.108366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.872 qpair failed and we were unable to recover it. 00:37:16.872 [2024-07-13 22:20:36.108529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.872 [2024-07-13 22:20:36.108562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.872 qpair failed and we were unable to recover it. 00:37:16.872 [2024-07-13 22:20:36.108747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.872 [2024-07-13 22:20:36.108780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.872 qpair failed and we were unable to recover it. 00:37:16.872 [2024-07-13 22:20:36.108997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.872 [2024-07-13 22:20:36.109034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.872 qpair failed and we were unable to recover it. 00:37:16.872 [2024-07-13 22:20:36.109244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.872 [2024-07-13 22:20:36.109281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.872 qpair failed and we were unable to recover it. 00:37:16.872 [2024-07-13 22:20:36.109518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.872 [2024-07-13 22:20:36.109555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.872 qpair failed and we were unable to recover it. 
00:37:16.872 [2024-07-13 22:20:36.109721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.872 [2024-07-13 22:20:36.109756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.872 qpair failed and we were unable to recover it. 00:37:16.872 [2024-07-13 22:20:36.109944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.872 [2024-07-13 22:20:36.109978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.872 qpair failed and we were unable to recover it. 00:37:16.872 [2024-07-13 22:20:36.110185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.872 [2024-07-13 22:20:36.110221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.872 qpair failed and we were unable to recover it. 00:37:16.872 [2024-07-13 22:20:36.110427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.872 [2024-07-13 22:20:36.110461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.872 qpair failed and we were unable to recover it. 00:37:16.872 [2024-07-13 22:20:36.110633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.872 [2024-07-13 22:20:36.110667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.872 qpair failed and we were unable to recover it. 00:37:16.872 [2024-07-13 22:20:36.110863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.872 [2024-07-13 22:20:36.110911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.872 qpair failed and we were unable to recover it. 00:37:16.872 [2024-07-13 22:20:36.111099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.872 [2024-07-13 22:20:36.111132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.872 qpair failed and we were unable to recover it. 00:37:16.872 [2024-07-13 22:20:36.111344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.872 [2024-07-13 22:20:36.111376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.872 qpair failed and we were unable to recover it. 00:37:16.872 [2024-07-13 22:20:36.111541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.872 [2024-07-13 22:20:36.111574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.872 qpair failed and we were unable to recover it. 00:37:16.872 [2024-07-13 22:20:36.111751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.872 [2024-07-13 22:20:36.111787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.872 qpair failed and we were unable to recover it. 
00:37:16.872 [2024-07-13 22:20:36.111989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.872 [2024-07-13 22:20:36.112025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.872 qpair failed and we were unable to recover it. 00:37:16.872 [2024-07-13 22:20:36.112199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.872 [2024-07-13 22:20:36.112231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.872 qpair failed and we were unable to recover it. 00:37:16.872 [2024-07-13 22:20:36.112417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.873 [2024-07-13 22:20:36.112450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.873 qpair failed and we were unable to recover it. 00:37:16.873 [2024-07-13 22:20:36.112608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.873 [2024-07-13 22:20:36.112640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.873 qpair failed and we were unable to recover it. 00:37:16.873 [2024-07-13 22:20:36.112815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.873 [2024-07-13 22:20:36.112848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.873 qpair failed and we were unable to recover it. 00:37:16.873 [2024-07-13 22:20:36.113035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.873 [2024-07-13 22:20:36.113068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.873 qpair failed and we were unable to recover it. 00:37:16.873 [2024-07-13 22:20:36.113224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.873 [2024-07-13 22:20:36.113258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.873 qpair failed and we were unable to recover it. 00:37:16.873 [2024-07-13 22:20:36.113412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.873 [2024-07-13 22:20:36.113445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.873 qpair failed and we were unable to recover it. 00:37:16.873 [2024-07-13 22:20:36.113637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.873 [2024-07-13 22:20:36.113673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.873 qpair failed and we were unable to recover it. 00:37:16.873 [2024-07-13 22:20:36.113886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.873 [2024-07-13 22:20:36.113920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.873 qpair failed and we were unable to recover it. 
00:37:16.873 [2024-07-13 22:20:36.114082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.873 [2024-07-13 22:20:36.114115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.873 qpair failed and we were unable to recover it. 00:37:16.873 [2024-07-13 22:20:36.114268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.873 [2024-07-13 22:20:36.114300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.873 qpair failed and we were unable to recover it. 00:37:16.873 [2024-07-13 22:20:36.114457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.873 [2024-07-13 22:20:36.114491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.873 qpair failed and we were unable to recover it. 00:37:16.873 [2024-07-13 22:20:36.114652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.873 [2024-07-13 22:20:36.114685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.873 qpair failed and we were unable to recover it. 00:37:16.873 [2024-07-13 22:20:36.114847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.873 [2024-07-13 22:20:36.114885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.873 qpair failed and we were unable to recover it. 00:37:16.873 [2024-07-13 22:20:36.115080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.873 [2024-07-13 22:20:36.115114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.873 qpair failed and we were unable to recover it. 00:37:16.873 [2024-07-13 22:20:36.115330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.873 [2024-07-13 22:20:36.115380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.873 qpair failed and we were unable to recover it. 00:37:16.873 [2024-07-13 22:20:36.115563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.873 [2024-07-13 22:20:36.115597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.873 qpair failed and we were unable to recover it. 00:37:16.873 [2024-07-13 22:20:36.115759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.873 [2024-07-13 22:20:36.115791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.873 qpair failed and we were unable to recover it. 00:37:16.873 [2024-07-13 22:20:36.115976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.873 [2024-07-13 22:20:36.116009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.873 qpair failed and we were unable to recover it. 
00:37:16.873 [2024-07-13 22:20:36.116187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.873 [2024-07-13 22:20:36.116220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.873 qpair failed and we were unable to recover it.
00:37:16.873 [... elided: the three lines above repeat 115 more times for tqpair=0x6150001ffe80, timestamps 22:20:36.116404 through 22:20:36.141041, every attempt to 10.0.0.2 port 4420 refused ...]
00:37:16.879 [2024-07-13 22:20:36.141225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.879 [2024-07-13 22:20:36.141273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:16.879 qpair failed and we were unable to recover it.
00:37:16.879 [... elided: repeats twice more for tqpair=0x61500021ff00 (through 22:20:36.141877), then once for tqpair=0x6150001f2780 at 22:20:36.142088 ...]
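[Editor's note] errno = 111 on Linux is ECONNREFUSED: each TCP SYN sent to 10.0.0.2 port 4420 (the IANA-registered NVMe/TCP port) is answered with an RST because nothing is accepting connections on that address at this point in the test, so SPDK's posix sock layer reports the failure straight out of connect(2) and nvme_tcp gives the qpair up. The standalone C sketch below is not SPDK code; only the address and port are copied from the log, everything else is illustrative. It reproduces the same errno against any reachable host with no listener on the port (against an unreachable address you would see a timeout instead; substituting 127.0.0.1 and an unused port refuses reliably):

/* econnrefused_demo.c - minimal sketch, not SPDK code.
 * Reproduces "connect() failed, errno = 111" by dialing a TCP
 * port that has no listener. Build: cc econnrefused_demo.c */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);              /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* No listener: the peer's kernel answers the SYN with RST and
         * connect() fails with ECONNREFUSED, which is 111 on Linux. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}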
00:37:16.879 [2024-07-13 22:20:36.142327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.879 [2024-07-13 22:20:36.142376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.879 qpair failed and we were unable to recover it.
00:37:16.879 [... elided: the same connect()/qpair failure repeats for another 59 attempts, alternating between tqpair=0x6150001f2780 and tqpair=0x61500021ff00, timestamps 22:20:36.142593 through 22:20:36.158591, every attempt to 10.0.0.2 port 4420 refused ...]
00:37:16.881 [2024-07-13 22:20:36.158843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.882 [2024-07-13 22:20:36.158889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.882 qpair failed and we were unable to recover it. 00:37:16.882 [2024-07-13 22:20:36.159108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.882 [2024-07-13 22:20:36.159143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.882 qpair failed and we were unable to recover it. 00:37:16.882 [2024-07-13 22:20:36.159337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.882 [2024-07-13 22:20:36.159385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:16.882 qpair failed and we were unable to recover it. 00:37:16.882 [2024-07-13 22:20:36.159620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.882 [2024-07-13 22:20:36.159666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.882 qpair failed and we were unable to recover it. 00:37:16.882 [2024-07-13 22:20:36.159873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.882 [2024-07-13 22:20:36.159922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.882 qpair failed and we were unable to recover it. 00:37:16.882 [2024-07-13 22:20:36.160146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.882 [2024-07-13 22:20:36.160206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.882 qpair failed and we were unable to recover it. 00:37:16.882 [2024-07-13 22:20:36.160470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.882 [2024-07-13 22:20:36.160518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.882 qpair failed and we were unable to recover it. 00:37:16.882 [2024-07-13 22:20:36.160795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.882 [2024-07-13 22:20:36.160835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.882 qpair failed and we were unable to recover it. 00:37:16.882 [2024-07-13 22:20:36.161097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.882 [2024-07-13 22:20:36.161144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.882 qpair failed and we were unable to recover it. 00:37:16.882 [2024-07-13 22:20:36.161350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.882 [2024-07-13 22:20:36.161385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:16.882 qpair failed and we were unable to recover it. 
00:37:16.882 [2024-07-13 22:20:36.161578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.882 [2024-07-13 22:20:36.161611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.882 qpair failed and we were unable to recover it.
00:37:16.882 [2024-07-13 22:20:36.161800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.882 [2024-07-13 22:20:36.161832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.882 qpair failed and we were unable to recover it.
00:37:16.882 [2024-07-13 22:20:36.162013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.882 [2024-07-13 22:20:36.162048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.882 qpair failed and we were unable to recover it.
00:37:16.882 [2024-07-13 22:20:36.162313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.882 [2024-07-13 22:20:36.162346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.882 qpair failed and we were unable to recover it.
00:37:16.882 [2024-07-13 22:20:36.162681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.882 [2024-07-13 22:20:36.162743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.882 qpair failed and we were unable to recover it.
00:37:16.882 [2024-07-13 22:20:36.162977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.882 [2024-07-13 22:20:36.163011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:16.882 qpair failed and we were unable to recover it.
00:37:16.882 [2024-07-13 22:20:36.163225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.882 [2024-07-13 22:20:36.163277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.882 qpair failed and we were unable to recover it.
00:37:16.882 [2024-07-13 22:20:36.163492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.882 [2024-07-13 22:20:36.163527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.882 qpair failed and we were unable to recover it.
00:37:16.882 [2024-07-13 22:20:36.163923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.882 [2024-07-13 22:20:36.163961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.882 qpair failed and we were unable to recover it.
00:37:16.882 [2024-07-13 22:20:36.164166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.882 [2024-07-13 22:20:36.164201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.882 qpair failed and we were unable to recover it.
00:37:16.882 [2024-07-13 22:20:36.164379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.882 [2024-07-13 22:20:36.164415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.882 qpair failed and we were unable to recover it.
00:37:16.882 [2024-07-13 22:20:36.164629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.883 [2024-07-13 22:20:36.164663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.883 qpair failed and we were unable to recover it.
00:37:16.883 [2024-07-13 22:20:36.164863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.883 [2024-07-13 22:20:36.164910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.883 qpair failed and we were unable to recover it.
00:37:16.883 [2024-07-13 22:20:36.165092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.883 [2024-07-13 22:20:36.165124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.883 qpair failed and we were unable to recover it.
00:37:16.883 [2024-07-13 22:20:36.165334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.883 [2024-07-13 22:20:36.165367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.883 qpair failed and we were unable to recover it.
00:37:16.883 [2024-07-13 22:20:36.165565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.883 [2024-07-13 22:20:36.165596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.883 qpair failed and we were unable to recover it.
00:37:16.883 [2024-07-13 22:20:36.165854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.883 [2024-07-13 22:20:36.165913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.883 qpair failed and we were unable to recover it.
00:37:16.883 [2024-07-13 22:20:36.166104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.883 [2024-07-13 22:20:36.166136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.883 qpair failed and we were unable to recover it.
00:37:16.883 [2024-07-13 22:20:36.166353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.883 [2024-07-13 22:20:36.166389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.883 qpair failed and we were unable to recover it.
00:37:16.883 [2024-07-13 22:20:36.166630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.883 [2024-07-13 22:20:36.166666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.883 qpair failed and we were unable to recover it.
00:37:16.883 [2024-07-13 22:20:36.166893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.883 [2024-07-13 22:20:36.166929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.883 qpair failed and we were unable to recover it.
00:37:16.883 [2024-07-13 22:20:36.167129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.883 [2024-07-13 22:20:36.167164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.883 qpair failed and we were unable to recover it.
00:37:16.883 [2024-07-13 22:20:36.167360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.883 [2024-07-13 22:20:36.167396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.883 qpair failed and we were unable to recover it.
00:37:16.883 [2024-07-13 22:20:36.167593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.883 [2024-07-13 22:20:36.167624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.883 qpair failed and we were unable to recover it.
00:37:16.883 [2024-07-13 22:20:36.167882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.883 [2024-07-13 22:20:36.167918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.883 qpair failed and we were unable to recover it.
00:37:16.883 [2024-07-13 22:20:36.168127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.883 [2024-07-13 22:20:36.168163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.883 qpair failed and we were unable to recover it.
00:37:16.883 [2024-07-13 22:20:36.168394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.883 [2024-07-13 22:20:36.168426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.883 qpair failed and we were unable to recover it.
00:37:16.883 [2024-07-13 22:20:36.168608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.883 [2024-07-13 22:20:36.168640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.883 qpair failed and we were unable to recover it.
00:37:16.883 [2024-07-13 22:20:36.168886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.883 [2024-07-13 22:20:36.168923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.883 qpair failed and we were unable to recover it.
00:37:16.883 [2024-07-13 22:20:36.169120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.883 [2024-07-13 22:20:36.169155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.883 qpair failed and we were unable to recover it.
00:37:16.883 [2024-07-13 22:20:36.169363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.883 [2024-07-13 22:20:36.169398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.883 qpair failed and we were unable to recover it.
00:37:16.883 [2024-07-13 22:20:36.169589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.883 [2024-07-13 22:20:36.169621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.883 qpair failed and we were unable to recover it.
00:37:16.883 [2024-07-13 22:20:36.169877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.883 [2024-07-13 22:20:36.169929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.883 qpair failed and we were unable to recover it.
00:37:16.883 [2024-07-13 22:20:36.170162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.883 [2024-07-13 22:20:36.170214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.884 qpair failed and we were unable to recover it.
00:37:16.884 [2024-07-13 22:20:36.170418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.884 [2024-07-13 22:20:36.170454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.884 qpair failed and we were unable to recover it.
00:37:16.884 [2024-07-13 22:20:36.170687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.884 [2024-07-13 22:20:36.170719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.884 qpair failed and we were unable to recover it.
00:37:16.884 [2024-07-13 22:20:36.170941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.884 [2024-07-13 22:20:36.170974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.884 qpair failed and we were unable to recover it.
00:37:16.884 [2024-07-13 22:20:36.171132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.884 [2024-07-13 22:20:36.171163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.884 qpair failed and we were unable to recover it.
00:37:16.884 [2024-07-13 22:20:36.171318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.884 [2024-07-13 22:20:36.171350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.884 qpair failed and we were unable to recover it.
00:37:16.884 [2024-07-13 22:20:36.171538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.884 [2024-07-13 22:20:36.171570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.884 qpair failed and we were unable to recover it.
00:37:16.884 [2024-07-13 22:20:36.171756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.884 [2024-07-13 22:20:36.171792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.884 qpair failed and we were unable to recover it.
00:37:16.884 [2024-07-13 22:20:36.171977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.884 [2024-07-13 22:20:36.172014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.884 qpair failed and we were unable to recover it.
00:37:16.884 [2024-07-13 22:20:36.172205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.884 [2024-07-13 22:20:36.172259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.884 qpair failed and we were unable to recover it.
00:37:16.884 [2024-07-13 22:20:36.172461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.884 [2024-07-13 22:20:36.172493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.884 qpair failed and we were unable to recover it.
00:37:16.884 [2024-07-13 22:20:36.172703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.884 [2024-07-13 22:20:36.172739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.884 qpair failed and we were unable to recover it.
00:37:16.884 [2024-07-13 22:20:36.172940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.884 [2024-07-13 22:20:36.172972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.884 qpair failed and we were unable to recover it.
00:37:16.884 [2024-07-13 22:20:36.173161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.884 [2024-07-13 22:20:36.173213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.884 qpair failed and we were unable to recover it.
00:37:16.884 [2024-07-13 22:20:36.173443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.884 [2024-07-13 22:20:36.173475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.884 qpair failed and we were unable to recover it.
00:37:16.884 [2024-07-13 22:20:36.173691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.884 [2024-07-13 22:20:36.173727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.884 qpair failed and we were unable to recover it.
00:37:16.884 [2024-07-13 22:20:36.173970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.884 [2024-07-13 22:20:36.174003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.884 qpair failed and we were unable to recover it.
00:37:16.884 [2024-07-13 22:20:36.174205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.884 [2024-07-13 22:20:36.174241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.884 qpair failed and we were unable to recover it.
00:37:16.884 [2024-07-13 22:20:36.174425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.884 [2024-07-13 22:20:36.174456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.884 qpair failed and we were unable to recover it.
00:37:16.884 [2024-07-13 22:20:36.174660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.884 [2024-07-13 22:20:36.174696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.884 qpair failed and we were unable to recover it.
00:37:16.884 [2024-07-13 22:20:36.174904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.884 [2024-07-13 22:20:36.174938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.884 qpair failed and we were unable to recover it.
00:37:16.884 [2024-07-13 22:20:36.175103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.884 [2024-07-13 22:20:36.175135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.884 qpair failed and we were unable to recover it.
00:37:16.884 [2024-07-13 22:20:36.175330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.884 [2024-07-13 22:20:36.175363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.884 qpair failed and we were unable to recover it.
00:37:16.884 [2024-07-13 22:20:36.175601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.884 [2024-07-13 22:20:36.175655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.884 qpair failed and we were unable to recover it.
00:37:16.884 [2024-07-13 22:20:36.175889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.884 [2024-07-13 22:20:36.175926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.884 qpair failed and we were unable to recover it.
00:37:16.884 [2024-07-13 22:20:36.176114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.885 [2024-07-13 22:20:36.176156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.885 qpair failed and we were unable to recover it.
00:37:16.885 [2024-07-13 22:20:36.176316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.885 [2024-07-13 22:20:36.176347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.885 qpair failed and we were unable to recover it.
00:37:16.885 [2024-07-13 22:20:36.176685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.885 [2024-07-13 22:20:36.176749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.885 qpair failed and we were unable to recover it.
00:37:16.885 [2024-07-13 22:20:36.176966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.885 [2024-07-13 22:20:36.177003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.885 qpair failed and we were unable to recover it.
00:37:16.885 [2024-07-13 22:20:36.177174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.885 [2024-07-13 22:20:36.177210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.885 qpair failed and we were unable to recover it.
00:37:16.885 [2024-07-13 22:20:36.177388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.885 [2024-07-13 22:20:36.177420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.885 qpair failed and we were unable to recover it.
00:37:16.885 [2024-07-13 22:20:36.177682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.885 [2024-07-13 22:20:36.177738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.885 qpair failed and we were unable to recover it.
00:37:16.885 [2024-07-13 22:20:36.177952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.885 [2024-07-13 22:20:36.177988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.885 qpair failed and we were unable to recover it.
00:37:16.885 [2024-07-13 22:20:36.178190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.885 [2024-07-13 22:20:36.178226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.885 qpair failed and we were unable to recover it.
00:37:16.885 [2024-07-13 22:20:36.178434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.885 [2024-07-13 22:20:36.178466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.885 qpair failed and we were unable to recover it.
00:37:16.885 [2024-07-13 22:20:36.178692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.885 [2024-07-13 22:20:36.178725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.885 qpair failed and we were unable to recover it.
00:37:16.885 [2024-07-13 22:20:36.178963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.885 [2024-07-13 22:20:36.179000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.885 qpair failed and we were unable to recover it.
00:37:16.885 [2024-07-13 22:20:36.179240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.885 [2024-07-13 22:20:36.179275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.885 qpair failed and we were unable to recover it.
00:37:16.885 [2024-07-13 22:20:36.179479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.885 [2024-07-13 22:20:36.179511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.885 qpair failed and we were unable to recover it.
00:37:16.885 [2024-07-13 22:20:36.179727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.885 [2024-07-13 22:20:36.179763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.885 qpair failed and we were unable to recover it.
00:37:16.885 [2024-07-13 22:20:36.180011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.885 [2024-07-13 22:20:36.180047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.885 qpair failed and we were unable to recover it.
00:37:16.885 [2024-07-13 22:20:36.180251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.885 [2024-07-13 22:20:36.180286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.885 qpair failed and we were unable to recover it.
00:37:16.885 [2024-07-13 22:20:36.180494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.885 [2024-07-13 22:20:36.180526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.885 qpair failed and we were unable to recover it.
00:37:16.885 [2024-07-13 22:20:36.180717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.885 [2024-07-13 22:20:36.180753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.885 qpair failed and we were unable to recover it.
00:37:16.885 [2024-07-13 22:20:36.180979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.885 [2024-07-13 22:20:36.181015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.885 qpair failed and we were unable to recover it.
00:37:16.885 [2024-07-13 22:20:36.181213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.885 [2024-07-13 22:20:36.181248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.885 qpair failed and we were unable to recover it.
00:37:16.885 [2024-07-13 22:20:36.181448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.885 [2024-07-13 22:20:36.181480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.885 qpair failed and we were unable to recover it.
00:37:16.885 [2024-07-13 22:20:36.181707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.885 [2024-07-13 22:20:36.181740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.885 qpair failed and we were unable to recover it.
00:37:16.885 [2024-07-13 22:20:36.181954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.886 [2024-07-13 22:20:36.181989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.886 qpair failed and we were unable to recover it.
00:37:16.886 [2024-07-13 22:20:36.182214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.886 [2024-07-13 22:20:36.182250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.886 qpair failed and we were unable to recover it.
00:37:16.886 [2024-07-13 22:20:36.182452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.886 [2024-07-13 22:20:36.182484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.886 qpair failed and we were unable to recover it.
00:37:16.886 [2024-07-13 22:20:36.182713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.886 [2024-07-13 22:20:36.182745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.886 qpair failed and we were unable to recover it.
00:37:16.886 [2024-07-13 22:20:36.182980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.886 [2024-07-13 22:20:36.183017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.886 qpair failed and we were unable to recover it.
00:37:16.886 [2024-07-13 22:20:36.183246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.886 [2024-07-13 22:20:36.183286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.886 qpair failed and we were unable to recover it.
00:37:16.886 [2024-07-13 22:20:36.183514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.886 [2024-07-13 22:20:36.183546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.886 qpair failed and we were unable to recover it.
00:37:16.886 [2024-07-13 22:20:36.183765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.886 [2024-07-13 22:20:36.183801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.886 qpair failed and we were unable to recover it.
00:37:16.886 [2024-07-13 22:20:36.184016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.886 [2024-07-13 22:20:36.184052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.886 qpair failed and we were unable to recover it.
00:37:16.886 [2024-07-13 22:20:36.184283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.886 [2024-07-13 22:20:36.184319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.886 qpair failed and we were unable to recover it.
00:37:16.886 [2024-07-13 22:20:36.184503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.886 [2024-07-13 22:20:36.184536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.886 qpair failed and we were unable to recover it.
00:37:16.886 [2024-07-13 22:20:36.184727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.886 [2024-07-13 22:20:36.184763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.886 qpair failed and we were unable to recover it.
00:37:16.886 [2024-07-13 22:20:36.184949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.887 [2024-07-13 22:20:36.184985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.887 qpair failed and we were unable to recover it.
00:37:16.887 [2024-07-13 22:20:36.185188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.887 [2024-07-13 22:20:36.185223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.887 qpair failed and we were unable to recover it.
00:37:16.887 [2024-07-13 22:20:36.185428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.887 [2024-07-13 22:20:36.185460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.887 qpair failed and we were unable to recover it.
00:37:16.887 [2024-07-13 22:20:36.185642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.887 [2024-07-13 22:20:36.185678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.887 qpair failed and we were unable to recover it.
00:37:16.887 [2024-07-13 22:20:36.185890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.887 [2024-07-13 22:20:36.185941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.887 qpair failed and we were unable to recover it.
00:37:16.887 [2024-07-13 22:20:36.186123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.887 [2024-07-13 22:20:36.186172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.887 qpair failed and we were unable to recover it.
00:37:16.887 [2024-07-13 22:20:36.186376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.887 [2024-07-13 22:20:36.186409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.887 qpair failed and we were unable to recover it.
00:37:16.887 [2024-07-13 22:20:36.186635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.887 [2024-07-13 22:20:36.186692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.887 qpair failed and we were unable to recover it.
00:37:16.887 [2024-07-13 22:20:36.186930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.887 [2024-07-13 22:20:36.186967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.887 qpair failed and we were unable to recover it.
00:37:16.887 [2024-07-13 22:20:36.187174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.887 [2024-07-13 22:20:36.187210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.887 qpair failed and we were unable to recover it.
00:37:16.887 [2024-07-13 22:20:36.187417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.887 [2024-07-13 22:20:36.187450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.887 qpair failed and we were unable to recover it.
00:37:16.887 [2024-07-13 22:20:36.187725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.887 [2024-07-13 22:20:36.187782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.887 qpair failed and we were unable to recover it.
00:37:16.887 [2024-07-13 22:20:36.187998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.887 [2024-07-13 22:20:36.188034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.887 qpair failed and we were unable to recover it.
00:37:16.887 [2024-07-13 22:20:36.188211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.887 [2024-07-13 22:20:36.188247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.887 qpair failed and we were unable to recover it.
00:37:16.887 [2024-07-13 22:20:36.188431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.887 [2024-07-13 22:20:36.188463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.887 qpair failed and we were unable to recover it.
00:37:16.887 [2024-07-13 22:20:36.188678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.887 [2024-07-13 22:20:36.188713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.887 qpair failed and we were unable to recover it.
00:37:16.887 [2024-07-13 22:20:36.188892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.887 [2024-07-13 22:20:36.188928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.887 qpair failed and we were unable to recover it.
00:37:16.887 [2024-07-13 22:20:36.189098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.887 [2024-07-13 22:20:36.189134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.888 qpair failed and we were unable to recover it.
00:37:16.888 [2024-07-13 22:20:36.189315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.888 [2024-07-13 22:20:36.189347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.888 qpair failed and we were unable to recover it.
00:37:16.888 [2024-07-13 22:20:36.189509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.888 [2024-07-13 22:20:36.189564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.888 qpair failed and we were unable to recover it.
00:37:16.888 [2024-07-13 22:20:36.189798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.888 [2024-07-13 22:20:36.189835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.888 qpair failed and we were unable to recover it.
00:37:16.888 [2024-07-13 22:20:36.190029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.888 [2024-07-13 22:20:36.190063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.888 qpair failed and we were unable to recover it.
00:37:16.888 [2024-07-13 22:20:36.190249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.888 [2024-07-13 22:20:36.190282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.888 qpair failed and we were unable to recover it.
00:37:16.888 [2024-07-13 22:20:36.190556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.888 [2024-07-13 22:20:36.190618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.888 qpair failed and we were unable to recover it.
00:37:16.888 [2024-07-13 22:20:36.190823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.888 [2024-07-13 22:20:36.190858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.888 qpair failed and we were unable to recover it.
00:37:16.888 [2024-07-13 22:20:36.191074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.888 [2024-07-13 22:20:36.191110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.888 qpair failed and we were unable to recover it.
00:37:16.888 [2024-07-13 22:20:36.191323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.888 [2024-07-13 22:20:36.191356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.888 qpair failed and we were unable to recover it.
00:37:16.888 [2024-07-13 22:20:36.191629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.888 [2024-07-13 22:20:36.191690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.888 qpair failed and we were unable to recover it.
00:37:16.888 [2024-07-13 22:20:36.191896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.888 [2024-07-13 22:20:36.191932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.888 qpair failed and we were unable to recover it.
00:37:16.888 [2024-07-13 22:20:36.192138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.888 [2024-07-13 22:20:36.192185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.888 qpair failed and we were unable to recover it.
00:37:16.888 [2024-07-13 22:20:36.192425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.888 [2024-07-13 22:20:36.192457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.888 qpair failed and we were unable to recover it.
00:37:16.888 [2024-07-13 22:20:36.192646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.888 [2024-07-13 22:20:36.192682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.888 qpair failed and we were unable to recover it.
00:37:16.888 [2024-07-13 22:20:36.192917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.888 [2024-07-13 22:20:36.192949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.888 qpair failed and we were unable to recover it.
00:37:16.888 [2024-07-13 22:20:36.193157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.888 [2024-07-13 22:20:36.193197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.888 qpair failed and we were unable to recover it.
00:37:16.888 [2024-07-13 22:20:36.193430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.888 [2024-07-13 22:20:36.193462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.888 qpair failed and we were unable to recover it.
00:37:16.888 [2024-07-13 22:20:36.193756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.888 [2024-07-13 22:20:36.193818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.888 qpair failed and we were unable to recover it.
00:37:16.888 [2024-07-13 22:20:36.194066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.888 [2024-07-13 22:20:36.194102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.888 qpair failed and we were unable to recover it.
00:37:16.888 [2024-07-13 22:20:36.194328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.888 [2024-07-13 22:20:36.194364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.888 qpair failed and we were unable to recover it.
00:37:16.888 [2024-07-13 22:20:36.194568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.888 [2024-07-13 22:20:36.194600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.888 qpair failed and we were unable to recover it.
00:37:16.888 [2024-07-13 22:20:36.194817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.888 [2024-07-13 22:20:36.194853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.888 qpair failed and we were unable to recover it.
00:37:16.888 [2024-07-13 22:20:36.195065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.888 [2024-07-13 22:20:36.195101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.888 qpair failed and we were unable to recover it.
00:37:16.888 [2024-07-13 22:20:36.195328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.888 [2024-07-13 22:20:36.195363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.889 qpair failed and we were unable to recover it.
00:37:16.889 [2024-07-13 22:20:36.195558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.889 [2024-07-13 22:20:36.195590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.889 qpair failed and we were unable to recover it.
00:37:16.889 [2024-07-13 22:20:36.195754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.889 [2024-07-13 22:20:36.195787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.889 qpair failed and we were unable to recover it.
00:37:16.889 [2024-07-13 22:20:36.196006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.889 [2024-07-13 22:20:36.196043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.889 qpair failed and we were unable to recover it.
00:37:16.889 [2024-07-13 22:20:36.196274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.889 [2024-07-13 22:20:36.196310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.889 qpair failed and we were unable to recover it.
00:37:16.889 [2024-07-13 22:20:36.196496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.889 [2024-07-13 22:20:36.196528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.889 qpair failed and we were unable to recover it.
00:37:16.889 [2024-07-13 22:20:36.196784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.889 [2024-07-13 22:20:36.196821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.889 qpair failed and we were unable to recover it.
00:37:16.889 [2024-07-13 22:20:36.197019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.889 [2024-07-13 22:20:36.197056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.889 qpair failed and we were unable to recover it.
00:37:16.889 [2024-07-13 22:20:36.197287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.889 [2024-07-13 22:20:36.197323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.889 qpair failed and we were unable to recover it.
00:37:16.889 [2024-07-13 22:20:36.197530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.889 [2024-07-13 22:20:36.197562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.889 qpair failed and we were unable to recover it.
00:37:16.889 [2024-07-13 22:20:36.197778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.889 [2024-07-13 22:20:36.197814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.889 qpair failed and we were unable to recover it.
00:37:16.889 [2024-07-13 22:20:36.197996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.889 [2024-07-13 22:20:36.198033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.889 qpair failed and we were unable to recover it.
00:37:16.889 [2024-07-13 22:20:36.198240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.889 [2024-07-13 22:20:36.198272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.889 qpair failed and we were unable to recover it.
00:37:16.889 [2024-07-13 22:20:36.198428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.889 [2024-07-13 22:20:36.198460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.889 qpair failed and we were unable to recover it.
00:37:16.889 [2024-07-13 22:20:36.198649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.889 [2024-07-13 22:20:36.198681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.889 qpair failed and we were unable to recover it.
00:37:16.889 [2024-07-13 22:20:36.198870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.889 [2024-07-13 22:20:36.198903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.889 qpair failed and we were unable to recover it.
00:37:16.889 [2024-07-13 22:20:36.199124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.889 [2024-07-13 22:20:36.199159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.889 qpair failed and we were unable to recover it.
00:37:16.889 [2024-07-13 22:20:36.199363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.889 [2024-07-13 22:20:36.199395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.889 qpair failed and we were unable to recover it.
00:37:16.889 [2024-07-13 22:20:36.199637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.889 [2024-07-13 22:20:36.199694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.889 qpair failed and we were unable to recover it.
00:37:16.889 [2024-07-13 22:20:36.199914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.889 [2024-07-13 22:20:36.199947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.889 qpair failed and we were unable to recover it.
00:37:16.889 [2024-07-13 22:20:36.200136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.889 [2024-07-13 22:20:36.200168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.889 qpair failed and we were unable to recover it.
00:37:16.889 [2024-07-13 22:20:36.200348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.889 [2024-07-13 22:20:36.200380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.889 qpair failed and we were unable to recover it.
00:37:16.889 [2024-07-13 22:20:36.200642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.889 [2024-07-13 22:20:36.200697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.889 qpair failed and we were unable to recover it.
00:37:16.889 [2024-07-13 22:20:36.200909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.889 [2024-07-13 22:20:36.200941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.889 qpair failed and we were unable to recover it.
00:37:16.889 [2024-07-13 22:20:36.201173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.890 [2024-07-13 22:20:36.201209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.890 qpair failed and we were unable to recover it.
00:37:16.890 [2024-07-13 22:20:36.201445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.890 [2024-07-13 22:20:36.201477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.890 qpair failed and we were unable to recover it.
00:37:16.890 [2024-07-13 22:20:36.201686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.890 [2024-07-13 22:20:36.201722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.890 qpair failed and we were unable to recover it.
00:37:16.890 [2024-07-13 22:20:36.201948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.890 [2024-07-13 22:20:36.201985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.890 qpair failed and we were unable to recover it.
00:37:16.890 [2024-07-13 22:20:36.202214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.890 [2024-07-13 22:20:36.202250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.890 qpair failed and we were unable to recover it.
00:37:16.890 [2024-07-13 22:20:36.202455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.890 [2024-07-13 22:20:36.202487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.890 qpair failed and we were unable to recover it.
00:37:16.890 [2024-07-13 22:20:36.202693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.890 [2024-07-13 22:20:36.202726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.890 qpair failed and we were unable to recover it.
00:37:16.890 [2024-07-13 22:20:36.202941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.890 [2024-07-13 22:20:36.202991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.890 qpair failed and we were unable to recover it.
00:37:16.890 [2024-07-13 22:20:36.203218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.890 [2024-07-13 22:20:36.203258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.890 qpair failed and we were unable to recover it.
00:37:16.890 [2024-07-13 22:20:36.203467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.890 [2024-07-13 22:20:36.203499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.890 qpair failed and we were unable to recover it.
00:37:16.890 [2024-07-13 22:20:36.203711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.890 [2024-07-13 22:20:36.203747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.890 qpair failed and we were unable to recover it.
00:37:16.890 [2024-07-13 22:20:36.203940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.890 [2024-07-13 22:20:36.203977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.890 qpair failed and we were unable to recover it.
00:37:16.890 [2024-07-13 22:20:36.204226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.890 [2024-07-13 22:20:36.204258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.890 qpair failed and we were unable to recover it.
00:37:16.890 [2024-07-13 22:20:36.204444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.890 [2024-07-13 22:20:36.204476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.890 qpair failed and we were unable to recover it.
00:37:16.890 [2024-07-13 22:20:36.204633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.890 [2024-07-13 22:20:36.204666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.890 qpair failed and we were unable to recover it.
00:37:16.890 [2024-07-13 22:20:36.204853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.890 [2024-07-13 22:20:36.204896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.890 qpair failed and we were unable to recover it.
00:37:16.890 [2024-07-13 22:20:36.205086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.890 [2024-07-13 22:20:36.205136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.890 qpair failed and we were unable to recover it.
00:37:16.890 [2024-07-13 22:20:36.205330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.890 [2024-07-13 22:20:36.205361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.890 qpair failed and we were unable to recover it.
00:37:16.890 [2024-07-13 22:20:36.205679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.890 [2024-07-13 22:20:36.205743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.890 qpair failed and we were unable to recover it.
00:37:16.890 [2024-07-13 22:20:36.205935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.890 [2024-07-13 22:20:36.205968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.890 qpair failed and we were unable to recover it.
00:37:16.890 [2024-07-13 22:20:36.206209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.890 [2024-07-13 22:20:36.206245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.890 qpair failed and we were unable to recover it.
00:37:16.890 [2024-07-13 22:20:36.206452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.890 [2024-07-13 22:20:36.206484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.890 qpair failed and we were unable to recover it.
00:37:16.890 [2024-07-13 22:20:36.206707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.890 [2024-07-13 22:20:36.206744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.890 qpair failed and we were unable to recover it.
00:37:16.890 [2024-07-13 22:20:36.206941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.891 [2024-07-13 22:20:36.206977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.891 qpair failed and we were unable to recover it.
00:37:16.891 [2024-07-13 22:20:36.207189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.891 [2024-07-13 22:20:36.207221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.891 qpair failed and we were unable to recover it.
00:37:16.891 [2024-07-13 22:20:36.207426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.891 [2024-07-13 22:20:36.207459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.891 qpair failed and we were unable to recover it.
00:37:16.891 [2024-07-13 22:20:36.207710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.891 [2024-07-13 22:20:36.207743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.891 qpair failed and we were unable to recover it.
00:37:16.891 [2024-07-13 22:20:36.207981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.891 [2024-07-13 22:20:36.208017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.891 qpair failed and we were unable to recover it.
00:37:16.891 [2024-07-13 22:20:36.208198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.891 [2024-07-13 22:20:36.208265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.891 qpair failed and we were unable to recover it.
00:37:16.891 [2024-07-13 22:20:36.208499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.891 [2024-07-13 22:20:36.208531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.891 qpair failed and we were unable to recover it.
00:37:16.891 [2024-07-13 22:20:36.208785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.891 [2024-07-13 22:20:36.208821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.891 qpair failed and we were unable to recover it.
00:37:16.891 [2024-07-13 22:20:36.209018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.891 [2024-07-13 22:20:36.209050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.891 qpair failed and we were unable to recover it.
00:37:16.891 [2024-07-13 22:20:36.209236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.891 [2024-07-13 22:20:36.209273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.891 qpair failed and we were unable to recover it. 00:37:16.891 [2024-07-13 22:20:36.209479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.891 [2024-07-13 22:20:36.209512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.891 qpair failed and we were unable to recover it. 00:37:16.891 [2024-07-13 22:20:36.209739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.891 [2024-07-13 22:20:36.209771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.891 qpair failed and we were unable to recover it. 00:37:16.891 [2024-07-13 22:20:36.209953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.891 [2024-07-13 22:20:36.209989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.891 qpair failed and we were unable to recover it. 00:37:16.891 [2024-07-13 22:20:36.210204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.891 [2024-07-13 22:20:36.210236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.891 qpair failed and we were unable to recover it. 00:37:16.891 [2024-07-13 22:20:36.210397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.891 [2024-07-13 22:20:36.210429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.891 qpair failed and we were unable to recover it. 00:37:16.891 [2024-07-13 22:20:36.210590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.891 [2024-07-13 22:20:36.210622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.891 qpair failed and we were unable to recover it. 00:37:16.891 [2024-07-13 22:20:36.210805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.891 [2024-07-13 22:20:36.210838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.891 qpair failed and we were unable to recover it. 00:37:16.891 [2024-07-13 22:20:36.211056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.891 [2024-07-13 22:20:36.211092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.891 qpair failed and we were unable to recover it. 00:37:16.891 [2024-07-13 22:20:36.211278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:16.891 [2024-07-13 22:20:36.211311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:16.891 qpair failed and we were unable to recover it. 
00:37:16.891 [2024-07-13 22:20:36.211548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.891 [2024-07-13 22:20:36.211584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.891 qpair failed and we were unable to recover it.
00:37:16.891 [2024-07-13 22:20:36.211827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.891 [2024-07-13 22:20:36.211863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.891 qpair failed and we were unable to recover it.
00:37:16.891 [2024-07-13 22:20:36.212102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.891 [2024-07-13 22:20:36.212139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.891 qpair failed and we were unable to recover it.
00:37:16.891 [2024-07-13 22:20:36.212339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.891 [2024-07-13 22:20:36.212371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.891 qpair failed and we were unable to recover it.
00:37:16.891 [2024-07-13 22:20:36.212647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.891 [2024-07-13 22:20:36.212703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.891 qpair failed and we were unable to recover it.
00:37:16.891 [2024-07-13 22:20:36.212943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.891 [2024-07-13 22:20:36.212976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.891 qpair failed and we were unable to recover it.
00:37:16.891 [2024-07-13 22:20:36.213219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.891 [2024-07-13 22:20:36.213279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.891 qpair failed and we were unable to recover it.
00:37:16.891 [2024-07-13 22:20:36.213508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.891 [2024-07-13 22:20:36.213541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.891 qpair failed and we were unable to recover it.
00:37:16.891 [2024-07-13 22:20:36.213739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.892 [2024-07-13 22:20:36.213772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.892 qpair failed and we were unable to recover it.
00:37:16.892 [2024-07-13 22:20:36.214008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.892 [2024-07-13 22:20:36.214044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.892 qpair failed and we were unable to recover it.
00:37:16.892 [2024-07-13 22:20:36.214247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.892 [2024-07-13 22:20:36.214283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.892 qpair failed and we were unable to recover it.
00:37:16.892 [2024-07-13 22:20:36.214495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.892 [2024-07-13 22:20:36.214528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.892 qpair failed and we were unable to recover it.
00:37:16.892 [2024-07-13 22:20:36.214755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.892 [2024-07-13 22:20:36.214792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.892 qpair failed and we were unable to recover it.
00:37:16.892 [2024-07-13 22:20:36.214977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.892 [2024-07-13 22:20:36.215009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.892 qpair failed and we were unable to recover it.
00:37:16.892 [2024-07-13 22:20:36.215220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.892 [2024-07-13 22:20:36.215255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.892 qpair failed and we were unable to recover it.
00:37:16.892 [2024-07-13 22:20:36.215509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.892 [2024-07-13 22:20:36.215556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.892 qpair failed and we were unable to recover it.
00:37:16.892 [2024-07-13 22:20:36.215813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.892 [2024-07-13 22:20:36.215849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.892 qpair failed and we were unable to recover it.
00:37:16.892 [2024-07-13 22:20:36.216074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.892 [2024-07-13 22:20:36.216110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.892 qpair failed and we were unable to recover it.
00:37:16.892 [2024-07-13 22:20:36.216337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.892 [2024-07-13 22:20:36.216374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.892 qpair failed and we were unable to recover it.
00:37:16.892 [2024-07-13 22:20:36.216590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.892 [2024-07-13 22:20:36.216622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.892 qpair failed and we were unable to recover it.
00:37:16.892 [2024-07-13 22:20:36.216836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.892 [2024-07-13 22:20:36.216888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.892 qpair failed and we were unable to recover it.
00:37:16.892 [2024-07-13 22:20:36.217102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.892 [2024-07-13 22:20:36.217150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.892 qpair failed and we were unable to recover it.
00:37:16.892 [2024-07-13 22:20:36.217403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.892 [2024-07-13 22:20:36.217436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.892 qpair failed and we were unable to recover it.
00:37:16.892 [2024-07-13 22:20:36.217597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:16.892 [2024-07-13 22:20:36.217629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:16.892 qpair failed and we were unable to recover it.
00:37:17.172 [2024-07-13 22:20:36.217874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.172 [2024-07-13 22:20:36.217912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.172 qpair failed and we were unable to recover it.
00:37:17.172 [2024-07-13 22:20:36.218128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.172 [2024-07-13 22:20:36.218162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.172 qpair failed and we were unable to recover it.
00:37:17.172 [2024-07-13 22:20:36.218354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.172 [2024-07-13 22:20:36.218386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.172 qpair failed and we were unable to recover it.
00:37:17.172 [2024-07-13 22:20:36.218579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.172 [2024-07-13 22:20:36.218611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.172 qpair failed and we were unable to recover it.
00:37:17.172 [2024-07-13 22:20:36.218775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.172 [2024-07-13 22:20:36.218825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.172 qpair failed and we were unable to recover it.
00:37:17.172 [2024-07-13 22:20:36.219018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.172 [2024-07-13 22:20:36.219065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.172 qpair failed and we were unable to recover it.
00:37:17.172 [2024-07-13 22:20:36.219291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.172 [2024-07-13 22:20:36.219323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.172 qpair failed and we were unable to recover it.
00:37:17.172 [2024-07-13 22:20:36.219546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.172 [2024-07-13 22:20:36.219579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.172 qpair failed and we were unable to recover it.
00:37:17.172 [2024-07-13 22:20:36.219803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.172 [2024-07-13 22:20:36.219841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.172 qpair failed and we were unable to recover it.
00:37:17.172 [2024-07-13 22:20:36.220064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.172 [2024-07-13 22:20:36.220105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.172 qpair failed and we were unable to recover it.
00:37:17.172 [2024-07-13 22:20:36.220339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.172 [2024-07-13 22:20:36.220392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.172 qpair failed and we were unable to recover it.
00:37:17.172 [2024-07-13 22:20:36.220603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.172 [2024-07-13 22:20:36.220636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.172 qpair failed and we were unable to recover it.
00:37:17.172 [2024-07-13 22:20:36.220827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.172 [2024-07-13 22:20:36.220863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.172 qpair failed and we were unable to recover it.
00:37:17.172 [2024-07-13 22:20:36.221075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.172 [2024-07-13 22:20:36.221109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.172 qpair failed and we were unable to recover it.
00:37:17.172 [2024-07-13 22:20:36.221289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.172 [2024-07-13 22:20:36.221323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.172 qpair failed and we were unable to recover it.
00:37:17.172 [2024-07-13 22:20:36.221483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.172 [2024-07-13 22:20:36.221516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.172 qpair failed and we were unable to recover it.
00:37:17.172 [2024-07-13 22:20:36.221691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.172 [2024-07-13 22:20:36.221735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.221970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.222007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.222235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.222271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.222502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.222535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.222759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.222796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.222986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.223022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.223232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.223273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.223454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.223486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.223694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.223730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.223912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.223949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.224156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.224206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.224441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.224474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.224688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.224724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.224961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.224997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.225180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.225215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.225391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.225423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.225589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.225622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.225874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.225911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.226111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.226146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.226382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.226414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.226689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.226749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.226954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.226990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.227162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.227198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.227410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.227442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.227614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.227647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.227880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.227916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.228125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.228160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.228344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.228378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.228653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.228707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.228910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.228946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.229166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.229198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.229385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.229417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.229595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.229628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.229794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.229827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.229990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.230023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.230176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.230208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.230395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.230428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.230603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.230639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.230811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.230848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.231085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.231118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.231399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.173 [2024-07-13 22:20:36.231455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.173 qpair failed and we were unable to recover it.
00:37:17.173 [2024-07-13 22:20:36.231648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.231680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.231894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.231930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.232149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.232182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.232399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.232455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.232664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.232700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.232912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.232949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.233162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.233195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.233444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.233477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.233727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.233764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.233990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.234027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.234232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.234265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.234472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.234508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.234703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.234738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.234951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.234985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.235143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.235175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.235355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.235391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.235600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.235636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.235842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.235886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.236097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.236129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.236374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.236433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.236663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.236698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.236902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.236939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.237152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.237184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.237434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.237471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.237649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.237684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.237915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.237947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.238140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.238172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.238373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.238405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.238619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.238655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.238878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.238914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.239125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.239156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.239397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.239433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.239644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.239680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.239881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.239928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.240132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.240165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.240434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.240492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.240698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.240734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.240941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.240977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.241162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.241195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.241475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.241532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.174 [2024-07-13 22:20:36.241735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.174 [2024-07-13 22:20:36.241770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.174 qpair failed and we were unable to recover it.
00:37:17.175 [2024-07-13 22:20:36.241997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.175 [2024-07-13 22:20:36.242033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.175 qpair failed and we were unable to recover it.
00:37:17.175 [2024-07-13 22:20:36.242243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.175 [2024-07-13 22:20:36.242275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.175 qpair failed and we were unable to recover it.
00:37:17.175 [2024-07-13 22:20:36.242537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.175 [2024-07-13 22:20:36.242595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.175 qpair failed and we were unable to recover it.
00:37:17.175 [2024-07-13 22:20:36.242848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.175 [2024-07-13 22:20:36.242886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.175 qpair failed and we were unable to recover it.
00:37:17.175 [2024-07-13 22:20:36.243121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.175 [2024-07-13 22:20:36.243162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.175 qpair failed and we were unable to recover it.
00:37:17.175 [2024-07-13 22:20:36.243368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.175 [2024-07-13 22:20:36.243400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.175 qpair failed and we were unable to recover it.
00:37:17.175 [2024-07-13 22:20:36.243619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.175 [2024-07-13 22:20:36.243679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.175 qpair failed and we were unable to recover it.
00:37:17.175 [2024-07-13 22:20:36.243897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.175 [2024-07-13 22:20:36.243930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.175 qpair failed and we were unable to recover it.
00:37:17.175 [2024-07-13 22:20:36.244113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.175 [2024-07-13 22:20:36.244163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.175 qpair failed and we were unable to recover it.
00:37:17.175 [2024-07-13 22:20:36.244403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.175 [2024-07-13 22:20:36.244436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.175 qpair failed and we were unable to recover it.
00:37:17.175 [2024-07-13 22:20:36.244709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.175 [2024-07-13 22:20:36.244766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.175 qpair failed and we were unable to recover it.
00:37:17.175 [2024-07-13 22:20:36.244997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.175 [2024-07-13 22:20:36.245033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.175 qpair failed and we were unable to recover it.
00:37:17.175 [2024-07-13 22:20:36.245235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.175 [2024-07-13 22:20:36.245270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.175 qpair failed and we were unable to recover it.
00:37:17.175 [2024-07-13 22:20:36.245451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.175 [2024-07-13 22:20:36.245484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.175 qpair failed and we were unable to recover it.
00:37:17.175 [2024-07-13 22:20:36.245748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.175 [2024-07-13 22:20:36.245805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.175 qpair failed and we were unable to recover it.
00:37:17.175 [2024-07-13 22:20:36.246057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.175 [2024-07-13 22:20:36.246090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.175 qpair failed and we were unable to recover it.
00:37:17.175 [2024-07-13 22:20:36.246332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.175 [2024-07-13 22:20:36.246368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.175 qpair failed and we were unable to recover it.
00:37:17.175 [2024-07-13 22:20:36.246553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.175 [2024-07-13 22:20:36.246585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.175 qpair failed and we were unable to recover it.
00:37:17.175 [2024-07-13 22:20:36.246794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.175 [2024-07-13 22:20:36.246830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.175 qpair failed and we were unable to recover it. 00:37:17.175 [2024-07-13 22:20:36.247053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.175 [2024-07-13 22:20:36.247089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.175 qpair failed and we were unable to recover it. 00:37:17.175 [2024-07-13 22:20:36.247292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.175 [2024-07-13 22:20:36.247328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.175 qpair failed and we were unable to recover it. 00:37:17.175 [2024-07-13 22:20:36.247560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.175 [2024-07-13 22:20:36.247592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.175 qpair failed and we were unable to recover it. 00:37:17.175 [2024-07-13 22:20:36.247838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.175 [2024-07-13 22:20:36.247880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.175 qpair failed and we were unable to recover it. 00:37:17.175 [2024-07-13 22:20:36.248107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.175 [2024-07-13 22:20:36.248143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.175 qpair failed and we were unable to recover it. 00:37:17.175 [2024-07-13 22:20:36.248345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.175 [2024-07-13 22:20:36.248380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.175 qpair failed and we were unable to recover it. 00:37:17.175 [2024-07-13 22:20:36.248567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.175 [2024-07-13 22:20:36.248599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.175 qpair failed and we were unable to recover it. 00:37:17.175 [2024-07-13 22:20:36.248807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.175 [2024-07-13 22:20:36.248843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.175 qpair failed and we were unable to recover it. 00:37:17.175 [2024-07-13 22:20:36.249055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.175 [2024-07-13 22:20:36.249091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.175 qpair failed and we were unable to recover it. 
00:37:17.175 [2024-07-13 22:20:36.249297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.175 [2024-07-13 22:20:36.249333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.175 qpair failed and we were unable to recover it. 00:37:17.175 [2024-07-13 22:20:36.249568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.175 [2024-07-13 22:20:36.249600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.175 qpair failed and we were unable to recover it. 00:37:17.175 [2024-07-13 22:20:36.249821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.175 [2024-07-13 22:20:36.249857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.175 qpair failed and we were unable to recover it. 00:37:17.175 [2024-07-13 22:20:36.250059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.175 [2024-07-13 22:20:36.250096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.175 qpair failed and we were unable to recover it. 00:37:17.175 [2024-07-13 22:20:36.250271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.175 [2024-07-13 22:20:36.250309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.175 qpair failed and we were unable to recover it. 00:37:17.175 [2024-07-13 22:20:36.250542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.175 [2024-07-13 22:20:36.250575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.175 qpair failed and we were unable to recover it. 00:37:17.175 [2024-07-13 22:20:36.250781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.175 [2024-07-13 22:20:36.250817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.175 qpair failed and we were unable to recover it. 00:37:17.175 [2024-07-13 22:20:36.251061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.175 [2024-07-13 22:20:36.251093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.175 qpair failed and we were unable to recover it. 00:37:17.175 [2024-07-13 22:20:36.251301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.175 [2024-07-13 22:20:36.251336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.175 qpair failed and we were unable to recover it. 00:37:17.175 [2024-07-13 22:20:36.251547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.175 [2024-07-13 22:20:36.251579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.175 qpair failed and we were unable to recover it. 
00:37:17.175 [2024-07-13 22:20:36.251794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.175 [2024-07-13 22:20:36.251830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.175 qpair failed and we were unable to recover it. 00:37:17.175 [2024-07-13 22:20:36.252043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.175 [2024-07-13 22:20:36.252079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.175 qpair failed and we were unable to recover it. 00:37:17.175 [2024-07-13 22:20:36.252300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.252332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.252518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.252550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.252728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.252764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.253011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.253044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.253249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.253290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.253524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.253556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.253776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.253808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.254045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.254081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 
00:37:17.176 [2024-07-13 22:20:36.254287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.254323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.254528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.254560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.254748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.254783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.255020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.255056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.255298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.255334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.255574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.255607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.255864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.255903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.256113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.256148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.256388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.256430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.256642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.256675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 
00:37:17.176 [2024-07-13 22:20:36.256883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.256934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.257096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.257128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.257336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.257373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.257582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.257615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.257859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.257901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.258113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.258148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.258355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.258391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.258601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.258634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.258890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.258926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.259128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.259164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 
00:37:17.176 [2024-07-13 22:20:36.259392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.259424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.259605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.259637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.176 [2024-07-13 22:20:36.259856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.176 [2024-07-13 22:20:36.259897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.176 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.260143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.260179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.260360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.260397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.260578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.260610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.260770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.260802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.261044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.261078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.261285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.261321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.261551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.261584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 
00:37:17.177 [2024-07-13 22:20:36.261814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.261851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.262052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.262088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.262271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.262308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.262524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.262556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.262749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.262782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.263008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.263044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.263244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.263284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.263492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.263524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.263761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.263797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.263980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.264017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 
00:37:17.177 [2024-07-13 22:20:36.264230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.264263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.264423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.264456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.264686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.264743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.264984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.265021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.265253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.265285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.265494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.265526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.265766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.265798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.266007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.266058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.266287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.266323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.266527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.266559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 
00:37:17.177 [2024-07-13 22:20:36.266778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.266814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.267049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.267082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.267241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.267274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.267430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.267462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.267651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.267687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.267894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.267945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.268131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.268163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.268350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.268382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.268651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.268712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.268914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.268950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 
00:37:17.177 [2024-07-13 22:20:36.269151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.269188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.269418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.269450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.177 qpair failed and we were unable to recover it. 00:37:17.177 [2024-07-13 22:20:36.269702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.177 [2024-07-13 22:20:36.269759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.269970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.270008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.270212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.270248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.270457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.270489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.270709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.270745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.270923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.270959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.271194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.271231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.271411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.271443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 
00:37:17.178 [2024-07-13 22:20:36.271664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.271700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.271886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.271923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.272125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.272172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.272385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.272417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.272637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.272669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.272888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.272940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.273170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.273206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.273396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.273428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.273581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.273614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.273846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.273892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 
00:37:17.178 [2024-07-13 22:20:36.274091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.274127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.274362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.274393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.274610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.274646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.274851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.274898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.275135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.275171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.275403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.275435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.275710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.275772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.275993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.276029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.276240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.276272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.276457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.276491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 
00:37:17.178 [2024-07-13 22:20:36.276785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.276845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.277069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.277101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.277261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.277294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.277510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.277542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.277726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.277758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.277943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.277981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.278188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.278224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.278432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.278464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.278678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.278713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.278939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.278972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 
00:37:17.178 [2024-07-13 22:20:36.279161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.279194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.279352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.279384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.279568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.178 [2024-07-13 22:20:36.279600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.178 qpair failed and we were unable to recover it. 00:37:17.178 [2024-07-13 22:20:36.279835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.179 [2024-07-13 22:20:36.279879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.179 qpair failed and we were unable to recover it. 00:37:17.179 [2024-07-13 22:20:36.280126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.179 [2024-07-13 22:20:36.280161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.179 qpair failed and we were unable to recover it. 00:37:17.179 [2024-07-13 22:20:36.280363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.179 [2024-07-13 22:20:36.280396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.179 qpair failed and we were unable to recover it. 00:37:17.179 [2024-07-13 22:20:36.280557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.179 [2024-07-13 22:20:36.280589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.179 qpair failed and we were unable to recover it. 00:37:17.179 [2024-07-13 22:20:36.280759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.179 [2024-07-13 22:20:36.280794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.179 qpair failed and we were unable to recover it. 00:37:17.179 [2024-07-13 22:20:36.280991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.179 [2024-07-13 22:20:36.281028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.179 qpair failed and we were unable to recover it. 00:37:17.179 [2024-07-13 22:20:36.281259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.179 [2024-07-13 22:20:36.281291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.179 qpair failed and we were unable to recover it. 
00:37:17.179 [2024-07-13 22:20:36.281501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.179 [2024-07-13 22:20:36.281537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.179 qpair failed and we were unable to recover it. 00:37:17.179 [2024-07-13 22:20:36.281743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.179 [2024-07-13 22:20:36.281779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.179 qpair failed and we were unable to recover it. 00:37:17.179 [2024-07-13 22:20:36.281966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.179 [2024-07-13 22:20:36.282002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.179 qpair failed and we were unable to recover it. 00:37:17.179 [2024-07-13 22:20:36.282203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.179 [2024-07-13 22:20:36.282236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.179 qpair failed and we were unable to recover it. 00:37:17.179 [2024-07-13 22:20:36.282499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.179 [2024-07-13 22:20:36.282555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.179 qpair failed and we were unable to recover it. 00:37:17.179 [2024-07-13 22:20:36.282758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.179 [2024-07-13 22:20:36.282794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.179 qpair failed and we were unable to recover it. 00:37:17.179 [2024-07-13 22:20:36.283025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.179 [2024-07-13 22:20:36.283066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.179 qpair failed and we were unable to recover it. 00:37:17.179 [2024-07-13 22:20:36.283273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.179 [2024-07-13 22:20:36.283305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.179 qpair failed and we were unable to recover it. 00:37:17.179 [2024-07-13 22:20:36.283568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.179 [2024-07-13 22:20:36.283625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.179 qpair failed and we were unable to recover it. 00:37:17.179 [2024-07-13 22:20:36.283853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.179 [2024-07-13 22:20:36.283892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.179 qpair failed and we were unable to recover it. 
00:37:17.179 [2024-07-13 22:20:36.284115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.179 [2024-07-13 22:20:36.284153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.179 qpair failed and we were unable to recover it.
00:37:17.184 [2024-07-13 22:20:36.284..336] the same three-record failure (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 200 more times; duplicate records elided
00:37:17.184 [2024-07-13 22:20:36.336368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.184 [2024-07-13 22:20:36.336400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.184 qpair failed and we were unable to recover it. 00:37:17.184 [2024-07-13 22:20:36.336635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.184 [2024-07-13 22:20:36.336672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.184 qpair failed and we were unable to recover it. 00:37:17.184 [2024-07-13 22:20:36.336909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.184 [2024-07-13 22:20:36.336945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.184 qpair failed and we were unable to recover it. 00:37:17.184 [2024-07-13 22:20:36.337157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.184 [2024-07-13 22:20:36.337189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.184 qpair failed and we were unable to recover it. 00:37:17.184 [2024-07-13 22:20:36.337507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.184 [2024-07-13 22:20:36.337571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.184 qpair failed and we were unable to recover it. 00:37:17.184 [2024-07-13 22:20:36.337753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.184 [2024-07-13 22:20:36.337788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.184 qpair failed and we were unable to recover it. 00:37:17.184 [2024-07-13 22:20:36.337972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.184 [2024-07-13 22:20:36.338008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.184 qpair failed and we were unable to recover it. 00:37:17.184 [2024-07-13 22:20:36.338218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.184 [2024-07-13 22:20:36.338252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.184 qpair failed and we were unable to recover it. 00:37:17.184 [2024-07-13 22:20:36.338512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.184 [2024-07-13 22:20:36.338568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.184 qpair failed and we were unable to recover it. 00:37:17.184 [2024-07-13 22:20:36.338771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.184 [2024-07-13 22:20:36.338807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.184 qpair failed and we were unable to recover it. 
00:37:17.184 [2024-07-13 22:20:36.339003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.184 [2024-07-13 22:20:36.339038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.184 qpair failed and we were unable to recover it. 00:37:17.184 [2024-07-13 22:20:36.339233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.184 [2024-07-13 22:20:36.339265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.184 qpair failed and we were unable to recover it. 00:37:17.184 [2024-07-13 22:20:36.339507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.184 [2024-07-13 22:20:36.339561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.184 qpair failed and we were unable to recover it. 00:37:17.184 [2024-07-13 22:20:36.339765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.339806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.340012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.340045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.340211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.340243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.340488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.340544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.340777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.340810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.341004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.341038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.341226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.341259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 
00:37:17.185 [2024-07-13 22:20:36.341509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.341545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.341749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.341785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.342017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.342054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.342270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.342303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.342520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.342557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.342775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.342807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.342975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.343008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.343178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.343210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.343362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.343394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.343583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.343617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 
00:37:17.185 [2024-07-13 22:20:36.343848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.343922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.344143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.344190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.344526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.344606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.344877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.344917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.345102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.345139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.345321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.345354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.345537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.345585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.345842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.345900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.346176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.346230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.346458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.346491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 
00:37:17.185 [2024-07-13 22:20:36.346658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.346691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.346864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.346932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.347150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.347197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.347440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.347486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.347705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.347740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.347902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.347936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.348127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.348174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.348369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.348417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.185 [2024-07-13 22:20:36.348670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.185 [2024-07-13 22:20:36.348717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.185 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.348979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.349032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 
00:37:17.186 [2024-07-13 22:20:36.349275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.349310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.349529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.349562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.349751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.349798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.349998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.350050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.350243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.350290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.350512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.350561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.350764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.350797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.351008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.351044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.351240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.351293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.351505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.351553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 
00:37:17.186 [2024-07-13 22:20:36.351774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.351826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.352070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.352109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.352319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.352369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.352559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.352594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.352782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.352828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.353034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.353082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.353284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.353347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.353596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.353632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.353797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.353829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.354086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.354134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 
00:37:17.186 [2024-07-13 22:20:36.354384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.354430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.354666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.354713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.354981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.355027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.355238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.355274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.355458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.355490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.355673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.355719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.355941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.355989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.356247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.356293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.356507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.356542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.356707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.356740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 
00:37:17.186 [2024-07-13 22:20:36.356951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.356990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.357238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.357285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.357554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.357604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.357834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.357886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.358084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.358118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.358283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.358315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.358523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.358571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.358795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.358840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.359084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.186 [2024-07-13 22:20:36.359136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.186 qpair failed and we were unable to recover it. 00:37:17.186 [2024-07-13 22:20:36.359374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.359413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 
00:37:17.187 [2024-07-13 22:20:36.359629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.359661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.359818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.359860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.360078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.360124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.360347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.360400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.360618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.360666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.360878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.360912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.361100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.361133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.361321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.361369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.361592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.361640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.361859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.361926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 
00:37:17.187 [2024-07-13 22:20:36.362117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.362157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.362347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.362380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.362566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.362612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.362875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.362923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.363170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.363217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.363404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.363439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.363666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.363699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.363861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.363927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.364152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.364199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.364466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.364518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 
00:37:17.187 [2024-07-13 22:20:36.364759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.364799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.364998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.365032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.365232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.365264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.365443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.365491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.365711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.365758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.365984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.366032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.366290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.366329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.366541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.366578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.366761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.366812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.367121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.367183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 
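Note: errno 111 is ECONNREFUSED on Linux, so every connect() attempt toward 10.0.0.2 port 4420 above is being actively refused; posix_sock_create surfaces the raw socket error and nvme_tcp_qpair_connect_sock then fails the whole qpair. In the last entry the tqpair address also switches from 0x6150001f2780 to 0x615000210000, consistent with the failed qpair being freed and a fresh one allocated for the next attempt (addresses of the 0x615... form are typical of an AddressSanitizer-instrumented heap). A minimal standalone probe, not SPDK code, reproduces the same errno whenever nothing is listening on the target port:

/* probe.c: hypothetical standalone snippet, not part of SPDK.
 * Attempts one TCP connect to the address/port seen in the log and
 * prints the resulting errno. With no listener on 10.0.0.2:4420 this
 * prints errno = 111 (ECONNREFUSED), matching the entries above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}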
00:37:17.187 [2024-07-13 22:20:36.367407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.367445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.367725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.367779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.367996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.368044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.368243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.368279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.368506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.368560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.368756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.368790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.369007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.369043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.369233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.369286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.369646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.369681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 00:37:17.187 [2024-07-13 22:20:36.369880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.187 [2024-07-13 22:20:36.369916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.187 qpair failed and we were unable to recover it. 
00:37:17.187 [2024-07-13 22:20:36.370135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.188 [2024-07-13 22:20:36.370185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:17.188 qpair failed and we were unable to recover it.
00:37:17.188 [the three lines above repeat for roughly 200 further connect attempts between 22:20:36.370 and 22:20:36.425, differing only in their timestamps: every connect() to 10.0.0.2:4420 fails with errno = 111 (ECONNREFUSED) and the qpair cannot be recovered]
00:37:17.193 [2024-07-13 22:20:36.425273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.193 [2024-07-13 22:20:36.425323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:17.193 qpair failed and we were unable to recover it.
00:37:17.193 [2024-07-13 22:20:36.425545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.193 [2024-07-13 22:20:36.425595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.193 qpair failed and we were unable to recover it. 00:37:17.193 [2024-07-13 22:20:36.425770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.193 [2024-07-13 22:20:36.425803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.193 qpair failed and we were unable to recover it. 00:37:17.193 [2024-07-13 22:20:36.426016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.193 [2024-07-13 22:20:36.426066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.193 qpair failed and we were unable to recover it. 00:37:17.193 [2024-07-13 22:20:36.426272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.193 [2024-07-13 22:20:36.426322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.193 qpair failed and we were unable to recover it. 00:37:17.193 [2024-07-13 22:20:36.426543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.193 [2024-07-13 22:20:36.426593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.193 qpair failed and we were unable to recover it. 00:37:17.193 [2024-07-13 22:20:36.426786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.193 [2024-07-13 22:20:36.426819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.193 qpair failed and we were unable to recover it. 00:37:17.193 [2024-07-13 22:20:36.427042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.193 [2024-07-13 22:20:36.427094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.193 qpair failed and we were unable to recover it. 00:37:17.193 [2024-07-13 22:20:36.427318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.193 [2024-07-13 22:20:36.427369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.193 qpair failed and we were unable to recover it. 00:37:17.193 [2024-07-13 22:20:36.427578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.193 [2024-07-13 22:20:36.427628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.193 qpair failed and we were unable to recover it. 00:37:17.193 [2024-07-13 22:20:36.427857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.193 [2024-07-13 22:20:36.427897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.193 qpair failed and we were unable to recover it. 
00:37:17.193 [2024-07-13 22:20:36.428094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.193 [2024-07-13 22:20:36.428156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.193 qpair failed and we were unable to recover it. 00:37:17.193 [2024-07-13 22:20:36.428329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.193 [2024-07-13 22:20:36.428378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.193 qpair failed and we were unable to recover it. 00:37:17.193 [2024-07-13 22:20:36.428618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.193 [2024-07-13 22:20:36.428668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.193 qpair failed and we were unable to recover it. 00:37:17.193 [2024-07-13 22:20:36.428829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.193 [2024-07-13 22:20:36.428862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.193 qpair failed and we were unable to recover it. 00:37:17.193 [2024-07-13 22:20:36.429080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.193 [2024-07-13 22:20:36.429130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.193 qpair failed and we were unable to recover it. 00:37:17.193 [2024-07-13 22:20:36.429309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.193 [2024-07-13 22:20:36.429358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.193 qpair failed and we were unable to recover it. 00:37:17.193 [2024-07-13 22:20:36.429599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.193 [2024-07-13 22:20:36.429649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.193 qpair failed and we were unable to recover it. 00:37:17.193 [2024-07-13 22:20:36.429856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.429896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.430077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.430127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.430373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.430422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 
00:37:17.194 [2024-07-13 22:20:36.430601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.430650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.430857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.430904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.431095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.431129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.431321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.431354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.431518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.431550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.431743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.431775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.431946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.431979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.432157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.432190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.432345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.432378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.432575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.432609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 
00:37:17.194 [2024-07-13 22:20:36.432796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.432830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.433021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.433055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.433230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.433263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.433492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.433525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.433717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.433750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.433939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.433973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.434134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.434169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.434387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.434421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.434605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.434648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.434860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.434898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 
00:37:17.194 [2024-07-13 22:20:36.435087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.435118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.435336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.435368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.435533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.435564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.435751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.435786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.435980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.436013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.436179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.436214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.436401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.436434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.436606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.436638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.436857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.436895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.437090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.437124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 
00:37:17.194 [2024-07-13 22:20:36.437344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.437377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.437571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.194 [2024-07-13 22:20:36.437603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.194 qpair failed and we were unable to recover it. 00:37:17.194 [2024-07-13 22:20:36.437817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.437856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.438053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.438086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.438272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.438305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.438511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.438544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.438722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.438754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.438965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.438998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.439164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.439197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.439409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.439442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 
00:37:17.195 [2024-07-13 22:20:36.439604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.439641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.439808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.439841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.440048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.440082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.440306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.440339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.440530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.440562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.440731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.440763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.440992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.441024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.441213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.441245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.441429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.441462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.441631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.441664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 
00:37:17.195 [2024-07-13 22:20:36.441843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.441885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.442043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.442076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.442242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.442274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.442467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.442500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.442694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.442727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.442886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.442920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.443133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.443166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.443362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.443395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.443581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.443613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.443803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.443834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 
00:37:17.195 [2024-07-13 22:20:36.444067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.444100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.444269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.444301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.444462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.444496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.444681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.444716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.444891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.444925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.445118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.445150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.445306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.445337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.445552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.445585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.445770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.445803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.446010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.446043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 
00:37:17.195 [2024-07-13 22:20:36.446197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.446231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.446393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.195 [2024-07-13 22:20:36.446425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.195 qpair failed and we were unable to recover it. 00:37:17.195 [2024-07-13 22:20:36.446582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.446614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.446787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.446821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.447039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.447073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.447227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.447260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.447467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.447499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.447696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.447728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.447912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.447945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.448103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.448135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 
00:37:17.196 [2024-07-13 22:20:36.448296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.448335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.448535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.448571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.448762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.448805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.448997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.449032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.449239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.449272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.449438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.449471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.449661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.449693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.449906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.449938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.450148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.450179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.450346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.450379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 
00:37:17.196 [2024-07-13 22:20:36.450561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.450593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.450780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.450813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.451024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.451059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.451242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.451276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.451465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.451499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.451686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.451719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.451905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.451937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.452165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.452197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.452356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.452387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.452602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.452635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 
00:37:17.196 [2024-07-13 22:20:36.452826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.452858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.453059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.453093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.453312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.453364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.453604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.453655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.453874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.453908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.454078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.454111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.454349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.454400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.454561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.454596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.454808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.454841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 00:37:17.196 [2024-07-13 22:20:36.455053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.196 [2024-07-13 22:20:36.455087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.196 qpair failed and we were unable to recover it. 
00:37:17.196 [2024-07-13 22:20:36.455300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.196 [2024-07-13 22:20:36.455354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:17.196 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 22:20:36.455 and 22:20:36.508; duplicate entries collapsed ...]
00:37:17.202 [2024-07-13 22:20:36.508533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.202 [2024-07-13 22:20:36.508582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:17.202 qpair failed and we were unable to recover it.
00:37:17.202 [2024-07-13 22:20:36.508767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.508804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 00:37:17.202 [2024-07-13 22:20:36.509003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.509054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 00:37:17.202 [2024-07-13 22:20:36.509288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.509338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 00:37:17.202 [2024-07-13 22:20:36.509544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.509594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 00:37:17.202 [2024-07-13 22:20:36.509755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.509789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 00:37:17.202 [2024-07-13 22:20:36.510007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.510058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 00:37:17.202 [2024-07-13 22:20:36.510295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.510346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 00:37:17.202 [2024-07-13 22:20:36.510550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.510601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 00:37:17.202 [2024-07-13 22:20:36.510806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.510840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 00:37:17.202 [2024-07-13 22:20:36.511092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.511144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 
00:37:17.202 [2024-07-13 22:20:36.511333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.511383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 00:37:17.202 [2024-07-13 22:20:36.511590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.511640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 00:37:17.202 [2024-07-13 22:20:36.511807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.511839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 00:37:17.202 [2024-07-13 22:20:36.512008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.512043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 00:37:17.202 [2024-07-13 22:20:36.512328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.512379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 00:37:17.202 [2024-07-13 22:20:36.512621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.512672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 00:37:17.202 [2024-07-13 22:20:36.512842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.512884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 00:37:17.202 [2024-07-13 22:20:36.513181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.513238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 00:37:17.202 [2024-07-13 22:20:36.513472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.513532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 00:37:17.202 [2024-07-13 22:20:36.513719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.513753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 
00:37:17.202 [2024-07-13 22:20:36.513939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.513991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 00:37:17.202 [2024-07-13 22:20:36.514176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.514228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 00:37:17.202 [2024-07-13 22:20:36.514411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.514461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 00:37:17.202 [2024-07-13 22:20:36.514649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.514683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 00:37:17.202 [2024-07-13 22:20:36.514888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.514922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 00:37:17.202 [2024-07-13 22:20:36.515109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.515161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.202 qpair failed and we were unable to recover it. 00:37:17.202 [2024-07-13 22:20:36.515372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.202 [2024-07-13 22:20:36.515424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.515671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.515726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.515957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.515993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.516150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.516183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 
00:37:17.203 [2024-07-13 22:20:36.516430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.516467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.516842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.516914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.517125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.517174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.517380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.517416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.517651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.517700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.518003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.518035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.518342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.518410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.518625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.518661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.518875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.518926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.519146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.519179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 
00:37:17.203 [2024-07-13 22:20:36.519374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.519416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.519654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.519703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.519918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.519951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.520145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.520177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.520426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.520475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.520713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.520748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.520939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.520971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.521305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.521367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.521635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.521692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.521880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.521931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 
00:37:17.203 [2024-07-13 22:20:36.522117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.522167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.522537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.522594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.522839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.522882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.523096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.523129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.523321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.523357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.523599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.523656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.523890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.523939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.524098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.524131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.524339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.524388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.524625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.524661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 
00:37:17.203 [2024-07-13 22:20:36.524863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.524921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.525092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.525124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.525312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.525348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.525616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.525651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.525863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.525905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.526135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.526191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.526396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.203 [2024-07-13 22:20:36.526432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.203 qpair failed and we were unable to recover it. 00:37:17.203 [2024-07-13 22:20:36.526714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.526750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.526948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.526981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.527170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.527202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 
00:37:17.204 [2024-07-13 22:20:36.527400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.527432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.527758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.527825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.528046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.528078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.528295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.528331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.528534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.528570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.528835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.528878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.529087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.529119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.529283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.529315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.529504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.529540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.529796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.529832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 
00:37:17.204 [2024-07-13 22:20:36.530044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.530081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.530276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.530308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.530486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.530522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.530731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.530769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.530995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.531028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.531244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.531281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.531490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.531526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.531816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.531852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.532062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.532095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.532343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.532376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 
00:37:17.204 [2024-07-13 22:20:36.532621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.532657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.532960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.532993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.533274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.533332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.533569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.533605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.533795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.533830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.534019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.534052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.534351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.534410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.534616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.534652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.534908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.534953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.535118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.535150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 
00:37:17.204 [2024-07-13 22:20:36.535363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.535399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.535566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.535602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.535811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.535847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.536042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.536075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.536244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.536279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.536480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.536516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.536694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.536730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.204 [2024-07-13 22:20:36.536966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.204 [2024-07-13 22:20:36.536999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.204 qpair failed and we were unable to recover it. 00:37:17.205 [2024-07-13 22:20:36.537187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-07-13 22:20:36.537219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.205 qpair failed and we were unable to recover it. 00:37:17.205 [2024-07-13 22:20:36.537406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-07-13 22:20:36.537439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.205 qpair failed and we were unable to recover it. 
00:37:17.205 [2024-07-13 22:20:36.537643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-07-13 22:20:36.537679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.205 qpair failed and we were unable to recover it. 00:37:17.205 [2024-07-13 22:20:36.537857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-07-13 22:20:36.537894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.205 qpair failed and we were unable to recover it. 00:37:17.205 [2024-07-13 22:20:36.538126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-07-13 22:20:36.538162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.205 qpair failed and we were unable to recover it. 00:37:17.205 [2024-07-13 22:20:36.538396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-07-13 22:20:36.538428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.205 qpair failed and we were unable to recover it. 00:37:17.205 [2024-07-13 22:20:36.538610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-07-13 22:20:36.538646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.205 qpair failed and we were unable to recover it. 00:37:17.205 [2024-07-13 22:20:36.538850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-07-13 22:20:36.538895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.205 qpair failed and we were unable to recover it. 00:37:17.205 [2024-07-13 22:20:36.539273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-07-13 22:20:36.539340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.205 qpair failed and we were unable to recover it. 00:37:17.205 [2024-07-13 22:20:36.539540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-07-13 22:20:36.539576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.205 qpair failed and we were unable to recover it. 00:37:17.205 [2024-07-13 22:20:36.539783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-07-13 22:20:36.539819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.205 qpair failed and we were unable to recover it. 00:37:17.205 [2024-07-13 22:20:36.540015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-07-13 22:20:36.540048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.205 qpair failed and we were unable to recover it. 
00:37:17.205 [2024-07-13 22:20:36.540351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-07-13 22:20:36.540419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.205 qpair failed and we were unable to recover it. 00:37:17.205 [2024-07-13 22:20:36.540634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-07-13 22:20:36.540666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.205 qpair failed and we were unable to recover it. 00:37:17.205 [2024-07-13 22:20:36.540898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-07-13 22:20:36.540935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.205 qpair failed and we were unable to recover it. 00:37:17.205 [2024-07-13 22:20:36.541151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-07-13 22:20:36.541184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.205 qpair failed and we were unable to recover it. 00:37:17.205 [2024-07-13 22:20:36.541463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-07-13 22:20:36.541521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.205 qpair failed and we were unable to recover it. 00:37:17.205 [2024-07-13 22:20:36.541734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-07-13 22:20:36.541770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.205 qpair failed and we were unable to recover it. 00:37:17.205 [2024-07-13 22:20:36.541979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-07-13 22:20:36.542016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.205 qpair failed and we were unable to recover it. 00:37:17.205 [2024-07-13 22:20:36.542256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-07-13 22:20:36.542288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.205 qpair failed and we were unable to recover it. 00:37:17.205 [2024-07-13 22:20:36.542670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-07-13 22:20:36.542737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.205 qpair failed and we were unable to recover it. 00:37:17.205 [2024-07-13 22:20:36.542949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.205 [2024-07-13 22:20:36.542986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.205 qpair failed and we were unable to recover it. 
00:37:17.205 [2024-07-13 22:20:36.543218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.205 [2024-07-13 22:20:36.543255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.205 qpair failed and we were unable to recover it.
00:37:17.486 (the three-line error triplet above repeats roughly 110 more times with only the timestamps advancing, 22:20:36.543458 through 22:20:36.570289; the repetitions are elided here)
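For context: errno 111 on Linux is ECONNREFUSED. The host-side initiator is retrying nvme_tcp_qpair_connect_sock against 10.0.0.2:4420 while the target is down, so every connect() attempt is refused and the same triplet is logged per attempt. A minimal shell sketch of that condition (illustrative only, not part of the test suite; the address and port are taken from the log above):

    #!/usr/bin/env bash
    # While no nvmf_tgt listens on 10.0.0.2:4420, every TCP connect attempt
    # fails with ECONNREFUSED (errno 111), matching the posix_sock_create
    # errors above; the loop ends once a listener accepts the connection.
    until timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
        echo "connect() to 10.0.0.2:4420 refused (errno 111), retrying..."
        sleep 0.2
    done
    echo "listener is back up"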
00:37:17.488 [2024-07-13 22:20:36.570500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.488 [2024-07-13 22:20:36.570535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.488 qpair failed and we were unable to recover it. 00:37:17.488 [2024-07-13 22:20:36.570740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.488 [2024-07-13 22:20:36.570776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.488 qpair failed and we were unable to recover it. 00:37:17.488 [2024-07-13 22:20:36.570999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.488 [2024-07-13 22:20:36.571031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.488 qpair failed and we were unable to recover it. 00:37:17.488 [2024-07-13 22:20:36.571231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.488 [2024-07-13 22:20:36.571264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.488 qpair failed and we were unable to recover it. 00:37:17.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 53352 Killed "${NVMF_APP[@]}" "$@" 00:37:17.488 [2024-07-13 22:20:36.571462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.488 [2024-07-13 22:20:36.571495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.488 qpair failed and we were unable to recover it. 00:37:17.488 [2024-07-13 22:20:36.571708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.488 [2024-07-13 22:20:36.571740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.488 qpair failed and we were unable to recover it. 00:37:17.488 22:20:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:37:17.488 22:20:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:37:17.488 [2024-07-13 22:20:36.571938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.488 [2024-07-13 22:20:36.571972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.488 qpair failed and we were unable to recover it. 00:37:17.488 22:20:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:17.488 [2024-07-13 22:20:36.572163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.488 22:20:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:17.488 [2024-07-13 22:20:36.572197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.488 qpair failed and we were unable to recover it. 
00:37:17.488 22:20:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:17.488 [2024-07-13 22:20:36.572363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.488 [2024-07-13 22:20:36.572396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.488 qpair failed and we were unable to recover it. 00:37:17.488 [2024-07-13 22:20:36.572601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.488 [2024-07-13 22:20:36.572633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.488 qpair failed and we were unable to recover it. 00:37:17.489 [2024-07-13 22:20:36.572826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.489 [2024-07-13 22:20:36.572859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.489 qpair failed and we were unable to recover it. 00:37:17.489 [2024-07-13 22:20:36.573055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.489 [2024-07-13 22:20:36.573092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.489 qpair failed and we were unable to recover it. 00:37:17.489 [2024-07-13 22:20:36.573356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.489 [2024-07-13 22:20:36.573411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.489 qpair failed and we were unable to recover it. 00:37:17.489 [2024-07-13 22:20:36.573639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.489 [2024-07-13 22:20:36.573684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.489 qpair failed and we were unable to recover it. 00:37:17.489 [2024-07-13 22:20:36.573893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.489 [2024-07-13 22:20:36.573926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.489 qpair failed and we were unable to recover it. 00:37:17.489 [2024-07-13 22:20:36.574109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.489 [2024-07-13 22:20:36.574150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.489 qpair failed and we were unable to recover it. 00:37:17.489 [2024-07-13 22:20:36.574426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.489 [2024-07-13 22:20:36.574462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.489 qpair failed and we were unable to recover it. 
00:37:17.489 [2024-07-13 22:20:36.574647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.489 [2024-07-13 22:20:36.574685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.489 qpair failed and we were unable to recover it. 00:37:17.489 [2024-07-13 22:20:36.574887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.489 [2024-07-13 22:20:36.574924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.489 qpair failed and we were unable to recover it. 00:37:17.489 [2024-07-13 22:20:36.575170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.489 [2024-07-13 22:20:36.575202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.489 qpair failed and we were unable to recover it. 00:37:17.489 [2024-07-13 22:20:36.575388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.489 [2024-07-13 22:20:36.575420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.489 qpair failed and we were unable to recover it. 00:37:17.489 [2024-07-13 22:20:36.575630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.489 [2024-07-13 22:20:36.575662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.489 qpair failed and we were unable to recover it. 00:37:17.489 [2024-07-13 22:20:36.575856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.489 [2024-07-13 22:20:36.575902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.489 22:20:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=54031 00:37:17.489 qpair failed and we were unable to recover it. 00:37:17.489 22:20:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:37:17.489 [2024-07-13 22:20:36.576094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.489 22:20:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 54031 00:37:17.489 [2024-07-13 22:20:36.576127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.489 qpair failed and we were unable to recover it. 00:37:17.489 [2024-07-13 22:20:36.576329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.489 22:20:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 54031 ']' 00:37:17.489 [2024-07-13 22:20:36.576361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.489 qpair failed and we were unable to recover it. 
00:37:17.489 22:20:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:17.489 [2024-07-13 22:20:36.576560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.489 [2024-07-13 22:20:36.576593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.489 qpair failed and we were unable to recover it. 00:37:17.489 22:20:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:17.489 [2024-07-13 22:20:36.576782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.489 22:20:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:17.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:17.489 [2024-07-13 22:20:36.576815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.489 qpair failed and we were unable to recover it. 00:37:17.489 22:20:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:17.489 [2024-07-13 22:20:36.577024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.489 22:20:36 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:17.489 [2024-07-13 22:20:36.577057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.489 qpair failed and we were unable to recover it. 00:37:17.489 [2024-07-13 22:20:36.577232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.489 [2024-07-13 22:20:36.577270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.489 qpair failed and we were unable to recover it. 00:37:17.489 [2024-07-13 22:20:36.577458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.489 [2024-07-13 22:20:36.577491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.489 qpair failed and we were unable to recover it. 00:37:17.489 [2024-07-13 22:20:36.577650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.489 [2024-07-13 22:20:36.577706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.489 qpair failed and we were unable to recover it. 00:37:17.489 [2024-07-13 22:20:36.577880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.489 [2024-07-13 22:20:36.577913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.489 qpair failed and we were unable to recover it. 
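The trace above is the test harness restarting the target after killing pid 53352: disconnect_init calls nvmfappstart, which launches a fresh nvmf_tgt (pid 54031) inside the cvl_0_0_ns_spdk network namespace and then calls waitforlisten to block until the new process is serving its RPC socket. A rough sketch of what a waitforlisten-style loop does, assuming (as the rpc_addr and max_retries variables in the trace suggest) that it polls for the UNIX-domain RPC socket; the real helper in autotest_common.sh may check more than this:

    #!/usr/bin/env bash
    # Poll until the target's RPC socket appears or the process dies.
    pid=54031                      # nvmfpid from the trace above
    rpc_addr=/var/tmp/spdk.sock    # rpc_addr from the trace above
    max_retries=100                # max_retries from the trace above
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || { echo "process $pid exited early" >&2; exit 1; }
        [[ -S $rpc_addr ]] && exit 0   # socket exists: target is listening
        sleep 0.1
    done
    echo "timed out waiting for $rpc_addr" >&2
    exit 1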
00:37:17.489 (the same connect() failed / sock connection error / qpair failed triplet continues unchanged while the new target starts up, 22:20:36.578084 through 22:20:36.592398; about sixty further repetitions are elided here)
00:37:17.491 [2024-07-13 22:20:36.592606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.491 [2024-07-13 22:20:36.592641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.491 qpair failed and we were unable to recover it. 00:37:17.491 [2024-07-13 22:20:36.592859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.491 [2024-07-13 22:20:36.592901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.491 qpair failed and we were unable to recover it. 00:37:17.491 [2024-07-13 22:20:36.593105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.491 [2024-07-13 22:20:36.593148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.491 qpair failed and we were unable to recover it. 00:37:17.491 [2024-07-13 22:20:36.593354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.491 [2024-07-13 22:20:36.593386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.491 qpair failed and we were unable to recover it. 00:37:17.491 [2024-07-13 22:20:36.593547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.491 [2024-07-13 22:20:36.593579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.491 qpair failed and we were unable to recover it. 00:37:17.491 [2024-07-13 22:20:36.593769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.491 [2024-07-13 22:20:36.593801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.491 qpair failed and we were unable to recover it. 00:37:17.491 [2024-07-13 22:20:36.594049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.491 [2024-07-13 22:20:36.594086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.491 qpair failed and we were unable to recover it. 00:37:17.491 [2024-07-13 22:20:36.594308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.491 [2024-07-13 22:20:36.594342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.491 qpair failed and we were unable to recover it. 00:37:17.491 [2024-07-13 22:20:36.594530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.491 [2024-07-13 22:20:36.594577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.491 qpair failed and we were unable to recover it. 00:37:17.491 [2024-07-13 22:20:36.594796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.491 [2024-07-13 22:20:36.594843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.491 qpair failed and we were unable to recover it. 
00:37:17.491 [2024-07-13 22:20:36.595084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.491 [2024-07-13 22:20:36.595132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.491 qpair failed and we were unable to recover it. 00:37:17.491 [2024-07-13 22:20:36.595358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.491 [2024-07-13 22:20:36.595406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.491 qpair failed and we were unable to recover it. 00:37:17.491 [2024-07-13 22:20:36.595630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.491 [2024-07-13 22:20:36.595664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.491 qpair failed and we were unable to recover it. 00:37:17.491 [2024-07-13 22:20:36.595843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.491 [2024-07-13 22:20:36.595889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.491 qpair failed and we were unable to recover it. 00:37:17.491 [2024-07-13 22:20:36.596100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.491 [2024-07-13 22:20:36.596154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.491 qpair failed and we were unable to recover it. 00:37:17.491 [2024-07-13 22:20:36.596379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.491 [2024-07-13 22:20:36.596426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.491 qpair failed and we were unable to recover it. 00:37:17.491 [2024-07-13 22:20:36.596629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.491 [2024-07-13 22:20:36.596676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.491 qpair failed and we were unable to recover it. 00:37:17.491 [2024-07-13 22:20:36.596916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.491 [2024-07-13 22:20:36.596952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.491 qpair failed and we were unable to recover it. 00:37:17.491 [2024-07-13 22:20:36.597159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.491 [2024-07-13 22:20:36.597193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.491 qpair failed and we were unable to recover it. 00:37:17.491 [2024-07-13 22:20:36.597354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.597401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 
00:37:17.492 [2024-07-13 22:20:36.597603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.597651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.597889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.597936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.598170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.598216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.598436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.598469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.598657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.598703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.598903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.598951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.599156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.599204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.599503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.599550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.599799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.599846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.600082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.600129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 
00:37:17.492 [2024-07-13 22:20:36.600370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.600406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.600640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.600673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.600841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.600884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.601168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.601214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.601445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.601495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.601716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.601780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.602022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.602070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.602247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.602281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.602451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.602484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.602755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.602803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 
00:37:17.492 [2024-07-13 22:20:36.603028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.603076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.603301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.603347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.603564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.603599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.603784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.603816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.604106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.604159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.604384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.604432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.604627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.604675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.604914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.604950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.605154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.605193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.605386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.605433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 
00:37:17.492 [2024-07-13 22:20:36.605658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.605705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.605906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.605954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.606159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.606208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.606423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.606458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.492 qpair failed and we were unable to recover it. 00:37:17.492 [2024-07-13 22:20:36.606625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.492 [2024-07-13 22:20:36.606658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.606884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.606933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.607130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.607178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.607376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.607422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.607643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.607678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.607889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.607923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 
00:37:17.493 [2024-07-13 22:20:36.608116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.608163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.608394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.608442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.608642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.608689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.608933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.608983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.609196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.609246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.609457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.609492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.609656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.609702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.609900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.609935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.610103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.610138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.610335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.610369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 
00:37:17.493 [2024-07-13 22:20:36.610532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.610567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.610760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.610794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.610988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.611037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.611236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.611271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.611458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.611493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.611703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.611763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.611984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.612035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.612341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.612390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.612592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.612628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.612909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.612944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 
00:37:17.493 [2024-07-13 22:20:36.613123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.613170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.613388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.613435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.613637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.613684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.613891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.613927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.614096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.614129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.614322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.614355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.614548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.614596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.614785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.614831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.615069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.615122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.615357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.615393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 
00:37:17.493 [2024-07-13 22:20:36.615556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.615588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.493 [2024-07-13 22:20:36.615784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.493 [2024-07-13 22:20:36.615827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.493 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.616035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.616081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.616304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.616352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.616553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.616600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.616842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.616883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.617049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.617082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.617245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.617278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.617466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.617525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.617742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.617789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 
00:37:17.494 [2024-07-13 22:20:36.618026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.618075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.618273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.618317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.618527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.618560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.618748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.618780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.618955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.619003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.619195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.619243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.619449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.619497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.619717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.619764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.619961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.619996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.620168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.620200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 
00:37:17.494 [2024-07-13 22:20:36.620367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.620401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.620586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.620631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.620818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.620872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.621079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.621126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.621351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.621386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.621560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.621593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.621846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.621908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.622159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.622206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.622428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.622475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.622693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.622728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 
00:37:17.494 [2024-07-13 22:20:36.622925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.622960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.623130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.623164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.623359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.623405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.623627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.623675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.623896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.623943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.624160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.624194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.624408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.624441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.624630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.624677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.624934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.624987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 00:37:17.494 [2024-07-13 22:20:36.625184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.494 [2024-07-13 22:20:36.625231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.494 qpair failed and we were unable to recover it. 
00:37:17.494 [2024-07-13 22:20:36.625425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.494 [2024-07-13 22:20:36.625460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.494 qpair failed and we were unable to recover it.
[... this three-record sequence (posix.c connect() failure, nvme_tcp.c sock connection error, qpair failure) repeats continuously from 22:20:36.625 through 22:20:36.655, every attempt failing with errno = 111 against 10.0.0.2 port 4420, switching between tqpair=0x6150001f2780 and tqpair=0x615000210000 ...]
00:37:17.498 [... the same connect()/qpair-failure sequence continues for tqpair=0x6150001f2780 through 22:20:36.657 ...]
00:37:17.498 [2024-07-13 22:20:36.657196] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:37:17.498 [2024-07-13 22:20:36.657316] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:17.498 [... connection-failure records from the still-running initiator interleave with the newly started nvmf application's initialization output and continue through 22:20:36.658 ...]
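For context on the bracketed EAL parameter list above: -c 0xF0 is a hexadecimal core mask selecting CPU cores 4-7, --base-virtaddr pins DPDK's memory mappings to a fixed virtual address, --file-prefix=spdk0 keeps this process's hugepage/runtime files separate so it can coexist with other DPDK processes, --match-allocations returns hugepage memory to the system in the same segments it was allocated, and --proc-type=auto lets EAL detect whether it is the primary or a secondary process. A minimal sketch of the core-mask arithmetic (plain C, not DPDK code; the helper name is hypothetical):

/* Hypothetical helper: list the CPU cores selected by a DPDK -c core mask. */
#include <stdio.h>

static void print_cores(unsigned long mask)
{
    for (unsigned bit = 0; bit < 8 * sizeof(mask); bit++)
        if (mask & (1UL << bit))
            printf("core %u\n", bit);
}

int main(void)
{
    print_cores(0xF0); /* -c 0xF0 -> cores 4, 5, 6, 7 */
    return 0;
}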
00:37:17.499 [... the connect() failed, errno = 111 / sock connection error / qpair-failure sequence keeps repeating for tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 through 22:20:36.671 ...]
00:37:17.500 [2024-07-13 22:20:36.671951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.500 [2024-07-13 22:20:36.671983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.500 qpair failed and we were unable to recover it.
00:37:17.500 [2024-07-13 22:20:36.672172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.500 [2024-07-13 22:20:36.672204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.500 qpair failed and we were unable to recover it. 00:37:17.500 [2024-07-13 22:20:36.672389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.500 [2024-07-13 22:20:36.672421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.500 qpair failed and we were unable to recover it. 00:37:17.500 [2024-07-13 22:20:36.672611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.500 [2024-07-13 22:20:36.672644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.500 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.672797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.672830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.672995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.673029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.673244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.673276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.673425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.673457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.673673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.673705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.673852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.673898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.674110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.674142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 
00:37:17.501 [2024-07-13 22:20:36.674322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.674354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.674543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.674576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.674751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.674783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.674945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.674978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.675176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.675208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.675393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.675425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.675607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.675640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.675796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.675829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.676019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.676051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.676241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.676273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 
00:37:17.501 [2024-07-13 22:20:36.676453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.676485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.676670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.676702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.676922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.676954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.677138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.677171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.677320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.677352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.677537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.677569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.677754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.677788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.677949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.677982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.678172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.678204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.678390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.678426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 
00:37:17.501 [2024-07-13 22:20:36.678585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.678618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.678774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.678806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.679017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.679050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.679216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.679248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.679439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.679471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.679659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.679691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.679882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.679916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.680076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.680109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.680291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.680323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.680536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.680569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 
00:37:17.501 [2024-07-13 22:20:36.680729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.680761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.501 qpair failed and we were unable to recover it. 00:37:17.501 [2024-07-13 22:20:36.680947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.501 [2024-07-13 22:20:36.680979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.681166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.681198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.681416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.681448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.681606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.681638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.681805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.681837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.682035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.682068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.682229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.682263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.682448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.682491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.682679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.682711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 
00:37:17.502 [2024-07-13 22:20:36.682897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.682930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.683140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.683172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.683381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.683413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.683624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.683657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.683834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.683872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.684042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.684074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.684240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.684273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.684460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.684492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.684653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.684685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.684836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.684874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 
00:37:17.502 [2024-07-13 22:20:36.685037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.685070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.685282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.685314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.685511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.685545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.685705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.685737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.685906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.685938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.686090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.686123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.686313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.686345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.686510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.686541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.686727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.686759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.686976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.687012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 
00:37:17.502 [2024-07-13 22:20:36.687204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.687236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.687399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.687431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.687586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.687618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.687803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.687835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.502 [2024-07-13 22:20:36.688023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.502 [2024-07-13 22:20:36.688056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.502 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.688226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.688258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.688421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.688454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.688636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.688668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.688837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.688881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.689066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.689098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 
00:37:17.503 [2024-07-13 22:20:36.689302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.689334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.689498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.689530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.689717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.689749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.689914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.689948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.690165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.690197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.690427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.690459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.690673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.690706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.690896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.690929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.691095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.691128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.691352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.691384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 
00:37:17.503 [2024-07-13 22:20:36.691573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.691605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.691792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.691824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.692002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.692035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.692194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.692226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.692412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.692444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.692628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.692661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.692863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.692923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.693103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.693138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.693315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.693351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.693521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.693554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 
00:37:17.503 [2024-07-13 22:20:36.693721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.693754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.693964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.693998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.694162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.694195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.503 qpair failed and we were unable to recover it. 00:37:17.503 [2024-07-13 22:20:36.694362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.503 [2024-07-13 22:20:36.694396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 00:37:17.504 [2024-07-13 22:20:36.694590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.694623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 00:37:17.504 [2024-07-13 22:20:36.694804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.694837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 00:37:17.504 [2024-07-13 22:20:36.695026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.695060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 00:37:17.504 [2024-07-13 22:20:36.695272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.695305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 00:37:17.504 [2024-07-13 22:20:36.695494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.695527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 00:37:17.504 [2024-07-13 22:20:36.695717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.695756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 
00:37:17.504 [2024-07-13 22:20:36.695922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.695956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 00:37:17.504 [2024-07-13 22:20:36.696155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.696188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 00:37:17.504 [2024-07-13 22:20:36.696380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.696414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 00:37:17.504 [2024-07-13 22:20:36.696566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.696599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 00:37:17.504 [2024-07-13 22:20:36.696756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.696790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 00:37:17.504 [2024-07-13 22:20:36.696990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.697023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 00:37:17.504 [2024-07-13 22:20:36.697202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.697235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 00:37:17.504 [2024-07-13 22:20:36.697448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.697481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 00:37:17.504 [2024-07-13 22:20:36.697663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.697696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 00:37:17.504 [2024-07-13 22:20:36.697917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.697951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 
00:37:17.504 [2024-07-13 22:20:36.698113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.698158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 00:37:17.504 [2024-07-13 22:20:36.698346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.698380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 00:37:17.504 [2024-07-13 22:20:36.698595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.698629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 00:37:17.504 [2024-07-13 22:20:36.698862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.698901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 00:37:17.504 [2024-07-13 22:20:36.699070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.699103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 00:37:17.504 [2024-07-13 22:20:36.699289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.699322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 00:37:17.504 [2024-07-13 22:20:36.699509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.699542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 00:37:17.504 [2024-07-13 22:20:36.699734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.699767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 00:37:17.504 [2024-07-13 22:20:36.699976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.700014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 00:37:17.504 [2024-07-13 22:20:36.700202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.504 [2024-07-13 22:20:36.700235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.504 qpair failed and we were unable to recover it. 
00:37:17.504 [2024-07-13 22:20:36.700432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.504 [2024-07-13 22:20:36.700465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:17.504 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111; sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats back-to-back with advancing timestamps from 22:20:36.700622 through 22:20:36.733124 ...]
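For context: errno = 111 on Linux is ECONNREFUSED, meaning the target host was reachable but answered the TCP SYN with a RST because nothing was listening on 10.0.0.2:4420 (the NVMe/TCP port used throughout this run). The sketch below is a minimal plain-socket reduction of what the posix_sock_create line reports; it is not SPDK's posix.c code path, which layers its own non-blocking handling and retries before nvme_tcp gives up with "qpair failed and we were unable to recover it."

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),   /* NVMe/TCP target port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* Reachable host but no listener => TCP RST => ECONNREFUSED (111),
         * the same errno posix_sock_create logs above. */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Run while nothing listens on the target port and it prints "connect() failed, errno = 111 (Connection refused)". Had the address been unreachable or filtered, the log would instead show errno 113 (EHOSTUNREACH) or a timeout; the refused error here indicates the NVMe-oF target simply was not accepting connections yet.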
[... the failure triplet continues from 22:20:36.733332 through 22:20:36.735114 ...]
00:37:17.510 EAL: No free 2048 kB hugepages reported on node 1
[... the failure triplet resumes at 22:20:36.735278 and continues through 22:20:36.746105, still against tqpair=0x6150001ffe80, addr=10.0.0.2, port=4420 ...]
00:37:17.512 [2024-07-13 22:20:36.746269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.512 [2024-07-13 22:20:36.746301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.512 qpair failed and we were unable to recover it. 00:37:17.512 [2024-07-13 22:20:36.746488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.512 [2024-07-13 22:20:36.746522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.512 qpair failed and we were unable to recover it. 00:37:17.512 [2024-07-13 22:20:36.746678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.512 [2024-07-13 22:20:36.746712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 00:37:17.513 [2024-07-13 22:20:36.746933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.746967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 00:37:17.513 [2024-07-13 22:20:36.747180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.747212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 00:37:17.513 [2024-07-13 22:20:36.747374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.747406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 00:37:17.513 [2024-07-13 22:20:36.747561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.747594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 00:37:17.513 [2024-07-13 22:20:36.747783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.747815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 00:37:17.513 [2024-07-13 22:20:36.748008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.748047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 00:37:17.513 [2024-07-13 22:20:36.748258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.748295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 
00:37:17.513 [2024-07-13 22:20:36.748481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.748513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 00:37:17.513 [2024-07-13 22:20:36.748735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.748768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 00:37:17.513 [2024-07-13 22:20:36.748948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.748981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 00:37:17.513 [2024-07-13 22:20:36.749138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.749171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 00:37:17.513 [2024-07-13 22:20:36.749363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.749396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 00:37:17.513 [2024-07-13 22:20:36.749605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.749637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 00:37:17.513 [2024-07-13 22:20:36.749797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.749829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 00:37:17.513 [2024-07-13 22:20:36.750006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.750039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 00:37:17.513 [2024-07-13 22:20:36.750231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.750263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 00:37:17.513 [2024-07-13 22:20:36.750425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.750457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 
00:37:17.513 [2024-07-13 22:20:36.750644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.750676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 00:37:17.513 [2024-07-13 22:20:36.750891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.750925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 00:37:17.513 [2024-07-13 22:20:36.751111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.751143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 00:37:17.513 [2024-07-13 22:20:36.751363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.751396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 00:37:17.513 [2024-07-13 22:20:36.751554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.751586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 00:37:17.513 [2024-07-13 22:20:36.751800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.751832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 00:37:17.513 [2024-07-13 22:20:36.752027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.752059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 00:37:17.513 [2024-07-13 22:20:36.752232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.752264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 00:37:17.513 [2024-07-13 22:20:36.752474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.752506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 00:37:17.513 [2024-07-13 22:20:36.752697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.752730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.513 qpair failed and we were unable to recover it. 
00:37:17.513 [2024-07-13 22:20:36.752885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.513 [2024-07-13 22:20:36.752917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.753083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.753115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.753339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.753372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.753536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.753568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.753729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.753762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.753977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.754010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.754211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.754244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.754428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.754461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.754651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.754683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.754846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.754886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 
00:37:17.514 [2024-07-13 22:20:36.755049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.755082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.755299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.755332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.755495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.755528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.755708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.755740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.755936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.755968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.756127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.756159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.756345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.756377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.756534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.756567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.756730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.756762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.756949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.756986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 
00:37:17.514 [2024-07-13 22:20:36.757153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.757185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.757362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.757395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.757579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.757611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.757801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.757833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.758017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.758051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.758218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.758260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.758416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.758448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.758631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.758665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.514 [2024-07-13 22:20:36.758831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.514 [2024-07-13 22:20:36.758863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.514 qpair failed and we were unable to recover it. 00:37:17.515 [2024-07-13 22:20:36.759054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.759087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 
00:37:17.515 [2024-07-13 22:20:36.759281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.759312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 00:37:17.515 [2024-07-13 22:20:36.759474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.759506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 00:37:17.515 [2024-07-13 22:20:36.759686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.759718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 00:37:17.515 [2024-07-13 22:20:36.759946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.759979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 00:37:17.515 [2024-07-13 22:20:36.760148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.760180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 00:37:17.515 [2024-07-13 22:20:36.760391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.760423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 00:37:17.515 [2024-07-13 22:20:36.760583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.760615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 00:37:17.515 [2024-07-13 22:20:36.760777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.760809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 00:37:17.515 [2024-07-13 22:20:36.760981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.761013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 00:37:17.515 [2024-07-13 22:20:36.761199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.761233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 
00:37:17.515 [2024-07-13 22:20:36.761446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.761480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 00:37:17.515 [2024-07-13 22:20:36.761643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.761676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 00:37:17.515 [2024-07-13 22:20:36.761833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.761880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 00:37:17.515 [2024-07-13 22:20:36.762064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.762095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 00:37:17.515 [2024-07-13 22:20:36.762287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.762319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 00:37:17.515 [2024-07-13 22:20:36.762485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.762518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 00:37:17.515 [2024-07-13 22:20:36.762682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.762715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 00:37:17.515 [2024-07-13 22:20:36.762938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.762971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 00:37:17.515 [2024-07-13 22:20:36.763134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.763170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 00:37:17.515 [2024-07-13 22:20:36.763356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.763389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 
00:37:17.515 [2024-07-13 22:20:36.763573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.763606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 00:37:17.515 [2024-07-13 22:20:36.763810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.763842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 00:37:17.515 [2024-07-13 22:20:36.764042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.764075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 00:37:17.515 [2024-07-13 22:20:36.764249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.764282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 00:37:17.515 [2024-07-13 22:20:36.764448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.764480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 00:37:17.515 [2024-07-13 22:20:36.764691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.764723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.515 qpair failed and we were unable to recover it. 00:37:17.515 [2024-07-13 22:20:36.764917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.515 [2024-07-13 22:20:36.764950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.765138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.765172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.765357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.765390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.765576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.765614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 
00:37:17.516 [2024-07-13 22:20:36.765803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.765835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.766034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.766066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.766258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.766290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.766483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.766515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.766674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.766706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.766917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.766950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.767139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.767181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.767394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.767427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.767614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.767647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.767804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.767835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 
00:37:17.516 [2024-07-13 22:20:36.768011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.768044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.768240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.768272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.768457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.768489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.768659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.768692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.768860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.768901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.769059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.769091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.769290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.769323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.769485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.769518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.769711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.769745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.769941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.769974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 
00:37:17.516 [2024-07-13 22:20:36.770158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.770190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.770375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.770407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.770574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.770606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.770792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.770825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.771012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.516 [2024-07-13 22:20:36.771046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.516 qpair failed and we were unable to recover it. 00:37:17.516 [2024-07-13 22:20:36.771230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.517 [2024-07-13 22:20:36.771263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.517 qpair failed and we were unable to recover it. 00:37:17.517 [2024-07-13 22:20:36.771426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.517 [2024-07-13 22:20:36.771459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.517 qpair failed and we were unable to recover it. 00:37:17.517 [2024-07-13 22:20:36.771642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.517 [2024-07-13 22:20:36.771673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.517 qpair failed and we were unable to recover it. 00:37:17.517 [2024-07-13 22:20:36.771871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.517 [2024-07-13 22:20:36.771904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.517 qpair failed and we were unable to recover it. 00:37:17.517 [2024-07-13 22:20:36.772060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.517 [2024-07-13 22:20:36.772103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.517 qpair failed and we were unable to recover it. 
00:37:17.517 [2024-07-13 22:20:36.772302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.517 [2024-07-13 22:20:36.772334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.517 qpair failed and we were unable to recover it. 00:37:17.517 [2024-07-13 22:20:36.772501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.517 [2024-07-13 22:20:36.772533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.517 qpair failed and we were unable to recover it. 00:37:17.517 [2024-07-13 22:20:36.772695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.517 [2024-07-13 22:20:36.772726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.517 qpair failed and we were unable to recover it. 00:37:17.517 [2024-07-13 22:20:36.772894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.517 [2024-07-13 22:20:36.772926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.517 qpair failed and we were unable to recover it. 00:37:17.517 [2024-07-13 22:20:36.773113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.517 [2024-07-13 22:20:36.773146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.517 qpair failed and we were unable to recover it. 00:37:17.517 [2024-07-13 22:20:36.773307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.517 [2024-07-13 22:20:36.773342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.517 qpair failed and we were unable to recover it. 00:37:17.517 [2024-07-13 22:20:36.773528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.517 [2024-07-13 22:20:36.773561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.517 qpair failed and we were unable to recover it. 00:37:17.517 [2024-07-13 22:20:36.773769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.517 [2024-07-13 22:20:36.773802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.517 qpair failed and we were unable to recover it. 00:37:17.517 [2024-07-13 22:20:36.773971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.517 [2024-07-13 22:20:36.774003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.517 qpair failed and we were unable to recover it. 00:37:17.517 [2024-07-13 22:20:36.774191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.517 [2024-07-13 22:20:36.774227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.517 qpair failed and we were unable to recover it. 
00:37:17.517 [2024-07-13 22:20:36.774455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.517 [2024-07-13 22:20:36.774487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:17.517 qpair failed and we were unable to recover it.
00:37:17.517 (last three-line error repeated 25 more times for tqpair=0x6150001ffe80, posix.c timestamps 22:20:36.774683 through 22:20:36.779900)
00:37:17.518 [2024-07-13 22:20:36.780131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.518 [2024-07-13 22:20:36.780180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:17.518 qpair failed and we were unable to recover it.
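errno 111 on Linux is ECONNREFUSED: the initiator can reach 10.0.0.2, but nothing is accepting TCP connections on port 4420 (the NVMe/TCP well-known port) at that moment, so every qpair connect attempt is refused. A minimal standalone sketch that reproduces the same errno (not SPDK code; it assumes a host where 10.0.0.2 is routable but has no listener on 4420):

/* repro_econnrefused.c - connect() to a port with no listener.
 * On Linux this fails with errno = 111 (ECONNREFUSED), matching
 * the posix_sock_create errors in the log above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* Expect: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}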
00:37:17.518 (last three-line error repeated 103 more times for tqpair=0x6150001f2780, posix.c timestamps 22:20:36.780402 through 22:20:36.801844)
00:37:17.522 (last three-line error repeated twice more, posix.c timestamps 22:20:36.802069 and 22:20:36.802286)
00:37:17.522 [2024-07-13 22:20:36.802403] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:37:17.522 (same three-line error repeated 77 more times for tqpair=0x6150001f2780, posix.c timestamps 22:20:36.802485 through 22:20:36.818424)
00:37:17.525 [2024-07-13 22:20:36.818626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.525 [2024-07-13 22:20:36.818659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.525 qpair failed and we were unable to recover it. 00:37:17.525 [2024-07-13 22:20:36.818814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.525 [2024-07-13 22:20:36.818845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.525 qpair failed and we were unable to recover it. 00:37:17.525 [2024-07-13 22:20:36.819014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.525 [2024-07-13 22:20:36.819047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.525 qpair failed and we were unable to recover it. 00:37:17.525 [2024-07-13 22:20:36.819233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.525 [2024-07-13 22:20:36.819272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.525 qpair failed and we were unable to recover it. 00:37:17.525 [2024-07-13 22:20:36.819436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.525 [2024-07-13 22:20:36.819469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.525 qpair failed and we were unable to recover it. 00:37:17.525 [2024-07-13 22:20:36.819655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.525 [2024-07-13 22:20:36.819686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.525 qpair failed and we were unable to recover it. 00:37:17.525 [2024-07-13 22:20:36.819856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.525 [2024-07-13 22:20:36.819911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.525 qpair failed and we were unable to recover it. 00:37:17.525 [2024-07-13 22:20:36.820099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.525 [2024-07-13 22:20:36.820131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.525 qpair failed and we were unable to recover it. 00:37:17.525 [2024-07-13 22:20:36.820295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.525 [2024-07-13 22:20:36.820328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.525 qpair failed and we were unable to recover it. 00:37:17.525 [2024-07-13 22:20:36.820537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.525 [2024-07-13 22:20:36.820570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.525 qpair failed and we were unable to recover it. 
00:37:17.525 [2024-07-13 22:20:36.820731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.525 [2024-07-13 22:20:36.820764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.525 qpair failed and we were unable to recover it. 00:37:17.525 [2024-07-13 22:20:36.820954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.525 [2024-07-13 22:20:36.820988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.525 qpair failed and we were unable to recover it. 00:37:17.525 [2024-07-13 22:20:36.821139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.525 [2024-07-13 22:20:36.821171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.525 qpair failed and we were unable to recover it. 00:37:17.525 [2024-07-13 22:20:36.821357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.525 [2024-07-13 22:20:36.821389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.525 qpair failed and we were unable to recover it. 00:37:17.525 [2024-07-13 22:20:36.821605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.525 [2024-07-13 22:20:36.821637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.525 qpair failed and we were unable to recover it. 00:37:17.525 [2024-07-13 22:20:36.821800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.525 [2024-07-13 22:20:36.821832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.525 qpair failed and we were unable to recover it. 00:37:17.526 [2024-07-13 22:20:36.821991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.822024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 00:37:17.526 [2024-07-13 22:20:36.822210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.822242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 00:37:17.526 [2024-07-13 22:20:36.822450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.822482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 00:37:17.526 [2024-07-13 22:20:36.822669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.822700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 
00:37:17.526 [2024-07-13 22:20:36.822860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.822898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 00:37:17.526 [2024-07-13 22:20:36.823059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.823092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 00:37:17.526 [2024-07-13 22:20:36.823252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.823284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 00:37:17.526 [2024-07-13 22:20:36.823449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.823480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 00:37:17.526 [2024-07-13 22:20:36.823645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.823677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 00:37:17.526 [2024-07-13 22:20:36.823891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.823935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 00:37:17.526 [2024-07-13 22:20:36.824091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.824124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 00:37:17.526 [2024-07-13 22:20:36.824287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.824319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 00:37:17.526 [2024-07-13 22:20:36.824502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.824533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 00:37:17.526 [2024-07-13 22:20:36.824719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.824750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 
00:37:17.526 [2024-07-13 22:20:36.824941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.824980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 00:37:17.526 [2024-07-13 22:20:36.825168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.825200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 00:37:17.526 [2024-07-13 22:20:36.825355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.825388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 00:37:17.526 [2024-07-13 22:20:36.825573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.825605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 00:37:17.526 [2024-07-13 22:20:36.825770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.825802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 00:37:17.526 [2024-07-13 22:20:36.825956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.825989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 00:37:17.526 [2024-07-13 22:20:36.826151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.826184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 00:37:17.526 [2024-07-13 22:20:36.826379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.826412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 00:37:17.526 [2024-07-13 22:20:36.826566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.826597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 00:37:17.526 [2024-07-13 22:20:36.826786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.826819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 
00:37:17.526 [2024-07-13 22:20:36.826986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.827018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 00:37:17.526 [2024-07-13 22:20:36.827204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.827236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 00:37:17.526 [2024-07-13 22:20:36.827398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.827429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 00:37:17.526 [2024-07-13 22:20:36.827596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.526 [2024-07-13 22:20:36.827629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.526 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.827864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.827902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.828063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.828095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.828284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.828317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.828501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.828533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.828700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.828733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.828945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.828978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 
00:37:17.527 [2024-07-13 22:20:36.829169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.829200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.829381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.829413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.829576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.829608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.829776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.829808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.829997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.830030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.830194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.830227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.830439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.830471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.830629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.830661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.830848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.830885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.831047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.831078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 
00:37:17.527 [2024-07-13 22:20:36.831252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.831284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.831466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.831498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.831668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.831702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.831877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.831909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.832076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.832109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.832319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.832351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.832539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.832572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.832733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.832765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.832957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.832989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.833167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.833198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 
00:37:17.527 [2024-07-13 22:20:36.833409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.833445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.833611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.527 [2024-07-13 22:20:36.833643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.527 qpair failed and we were unable to recover it. 00:37:17.527 [2024-07-13 22:20:36.833834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.833872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 00:37:17.528 [2024-07-13 22:20:36.834055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.834088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 00:37:17.528 [2024-07-13 22:20:36.834277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.834308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 00:37:17.528 [2024-07-13 22:20:36.834490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.834523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 00:37:17.528 [2024-07-13 22:20:36.834680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.834712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 00:37:17.528 [2024-07-13 22:20:36.834888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.834921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 00:37:17.528 [2024-07-13 22:20:36.835097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.835129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 00:37:17.528 [2024-07-13 22:20:36.835287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.835320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 
00:37:17.528 [2024-07-13 22:20:36.835505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.835537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 00:37:17.528 [2024-07-13 22:20:36.835692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.835725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 00:37:17.528 [2024-07-13 22:20:36.835907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.835940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 00:37:17.528 [2024-07-13 22:20:36.836104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.836136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 00:37:17.528 [2024-07-13 22:20:36.836333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.836366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 00:37:17.528 [2024-07-13 22:20:36.836556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.836589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 00:37:17.528 [2024-07-13 22:20:36.836768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.836800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 00:37:17.528 [2024-07-13 22:20:36.836988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.837022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 00:37:17.528 [2024-07-13 22:20:36.837186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.837218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 00:37:17.528 [2024-07-13 22:20:36.837377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.837420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 
00:37:17.528 [2024-07-13 22:20:36.837603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.837635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 00:37:17.528 [2024-07-13 22:20:36.837798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.837830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 00:37:17.528 [2024-07-13 22:20:36.838022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.838054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 00:37:17.528 [2024-07-13 22:20:36.838241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.838273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 00:37:17.528 [2024-07-13 22:20:36.838461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.838494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 00:37:17.528 [2024-07-13 22:20:36.838682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.838716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 00:37:17.528 [2024-07-13 22:20:36.838881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.838914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 00:37:17.528 [2024-07-13 22:20:36.839074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.839107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 00:37:17.528 [2024-07-13 22:20:36.839324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.839355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 00:37:17.528 [2024-07-13 22:20:36.839513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.528 [2024-07-13 22:20:36.839545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.528 qpair failed and we were unable to recover it. 
00:37:17.528 [2024-07-13 22:20:36.839715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.839748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.839935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.839968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.840131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.840164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.840348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.840379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.840570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.840602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.840792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.840825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.840994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.841027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.841251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.841284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.841474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.841506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.841697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.841728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 
00:37:17.529 [2024-07-13 22:20:36.841904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.841941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.842099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.842130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.842313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.842345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.842505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.842537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.842725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.842758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.842951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.842984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.843171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.843202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.843363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.843393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.843582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.843614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.843813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.843845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 
00:37:17.529 [2024-07-13 22:20:36.844013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.844046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.844207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.844240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.844417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.844449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.844650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.844681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.844873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.844906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.845078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.845110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.845293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.845326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.845504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.845536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.845731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.845763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.529 qpair failed and we were unable to recover it. 00:37:17.529 [2024-07-13 22:20:36.845957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.529 [2024-07-13 22:20:36.845990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.530 qpair failed and we were unable to recover it. 
00:37:17.530 [2024-07-13 22:20:36.846153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.530 [2024-07-13 22:20:36.846186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.530 qpair failed and we were unable to recover it.
[... the same two-line failure (posix_sock_create connect() errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 22:20:36.846 through 22:20:36.892 ...]
00:37:17.815 [2024-07-13 22:20:36.893006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.815 [2024-07-13 22:20:36.893058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.815 qpair failed and we were unable to recover it.
[... the same failure then repeats for tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 through 22:20:36.893 ...]
00:37:17.815 [2024-07-13 22:20:36.893908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.815 [2024-07-13 22:20:36.893943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.815 qpair failed and we were unable to recover it. 00:37:17.815 [2024-07-13 22:20:36.894100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.815 [2024-07-13 22:20:36.894135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.815 qpair failed and we were unable to recover it. 00:37:17.815 [2024-07-13 22:20:36.894329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.815 [2024-07-13 22:20:36.894363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.815 qpair failed and we were unable to recover it. 00:37:17.815 [2024-07-13 22:20:36.894531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.815 [2024-07-13 22:20:36.894566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.815 qpair failed and we were unable to recover it. 00:37:17.815 [2024-07-13 22:20:36.894765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.815 [2024-07-13 22:20:36.894798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.815 qpair failed and we were unable to recover it. 00:37:17.815 [2024-07-13 22:20:36.895007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.815 [2024-07-13 22:20:36.895041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.815 qpair failed and we were unable to recover it. 00:37:17.815 [2024-07-13 22:20:36.895254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.815 [2024-07-13 22:20:36.895289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.815 qpair failed and we were unable to recover it. 00:37:17.815 [2024-07-13 22:20:36.895482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.815 [2024-07-13 22:20:36.895516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.815 qpair failed and we were unable to recover it. 00:37:17.815 [2024-07-13 22:20:36.895687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.815 [2024-07-13 22:20:36.895721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.815 qpair failed and we were unable to recover it. 00:37:17.815 [2024-07-13 22:20:36.895886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.815 [2024-07-13 22:20:36.895921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.815 qpair failed and we were unable to recover it. 
00:37:17.816 [2024-07-13 22:20:36.896088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.896123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.896336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.896371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.896565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.896599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.896861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.896920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.897100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.897135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.897310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.897344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.897512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.897545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.897715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.897748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.897925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.897959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.898150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.898194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 
00:37:17.816 [2024-07-13 22:20:36.898349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.898382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.898591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.898624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.898786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.898818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.899028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.899063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.899295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.899331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.899548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.899581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.899758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.899794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.900046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.900080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.900280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.900314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.900500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.900533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 
00:37:17.816 [2024-07-13 22:20:36.900751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.900784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.900962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.900996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.901181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.901226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.901452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.901485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.901682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.901715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.901955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.901989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.902203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.902237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.902422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.902468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.902655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.902689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.902919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.902955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 
00:37:17.816 [2024-07-13 22:20:36.903164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.903209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.903390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.903433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.903647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.816 [2024-07-13 22:20:36.903680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.816 qpair failed and we were unable to recover it. 00:37:17.816 [2024-07-13 22:20:36.903847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.903889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.904082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.904115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.904290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.904325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.904516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.904549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.904732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.904765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.904944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.904978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.905171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.905204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 
00:37:17.817 [2024-07-13 22:20:36.905417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.905450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.905642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.905676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.905863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.905903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.906170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.906204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.906417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.906450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.906643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.906677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.906864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.906903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.907072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.907105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.907296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.907328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.907537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.907570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 
00:37:17.817 [2024-07-13 22:20:36.907734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.907767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.907978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.908011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.908195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.908236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.908395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.908428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.908616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.908650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.908844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.908896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.909085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.909118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.909296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.909328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.909528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.909560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.909748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.909781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 
00:37:17.817 [2024-07-13 22:20:36.909948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.909981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.910169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.910202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.910396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.910428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.910595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.910628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.910826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.910864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.911063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.911095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.911280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.911312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.911522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.911560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.911770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.911803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.911989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.912024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 
00:37:17.817 [2024-07-13 22:20:36.912191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.912224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.912390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.912422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.912593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.912625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.817 [2024-07-13 22:20:36.912790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.817 [2024-07-13 22:20:36.912822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.817 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.913010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.913045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.913244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.913278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.913442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.913476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.913645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.913678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.913836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.913886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.914058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.914091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 
00:37:17.818 [2024-07-13 22:20:36.914255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.914289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.914482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.914514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.914679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.914713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.914924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.914956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.915145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.915177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.915415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.915448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.915661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.915694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.915863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.915903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.916059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.916092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.916288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.916322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 
00:37:17.818 [2024-07-13 22:20:36.916485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.916519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.916678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.916710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.916878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.916910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.917095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.917128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.917319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.917362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.917575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.917608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.917798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.917831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.918022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.918055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.918252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.918284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.918474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.918506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 
00:37:17.818 [2024-07-13 22:20:36.918694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.918727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.918942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.918976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.919164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.919197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.919387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.919420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.919600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.919634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.919827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.919860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.920036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.920068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.920240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.920279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.920496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.920528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.920711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.920743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 
00:37:17.818 [2024-07-13 22:20:36.920928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.920962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.921160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.921193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.921379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.921412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.921593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.818 [2024-07-13 22:20:36.921625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.818 qpair failed and we were unable to recover it. 00:37:17.818 [2024-07-13 22:20:36.921809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.819 [2024-07-13 22:20:36.921842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.819 qpair failed and we were unable to recover it. 00:37:17.819 [2024-07-13 22:20:36.922040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.819 [2024-07-13 22:20:36.922074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.819 qpair failed and we were unable to recover it. 00:37:17.819 [2024-07-13 22:20:36.922258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.819 [2024-07-13 22:20:36.922291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.819 qpair failed and we were unable to recover it. 00:37:17.819 [2024-07-13 22:20:36.922480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.819 [2024-07-13 22:20:36.922513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.819 qpair failed and we were unable to recover it. 00:37:17.819 [2024-07-13 22:20:36.922698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.819 [2024-07-13 22:20:36.922731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.819 qpair failed and we were unable to recover it. 00:37:17.819 [2024-07-13 22:20:36.922924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.819 [2024-07-13 22:20:36.922957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.819 qpair failed and we were unable to recover it. 
00:37:17.819 [2024-07-13 22:20:36.923122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.819 [2024-07-13 22:20:36.923156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:17.819 qpair failed and we were unable to recover it.
00:37:17.824 [... the same three-line error group repeats ~210 times between 22:20:36.923122 and 22:20:36.969143, varying only in timestamp: every connect() to 10.0.0.2 port 4420 fails with errno = 111 and qpair 0x6150001ffe80 cannot be recovered ...]
00:37:17.824 [2024-07-13 22:20:36.969357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.824 [2024-07-13 22:20:36.969389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.824 qpair failed and we were unable to recover it. 00:37:17.824 [2024-07-13 22:20:36.969607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.824 [2024-07-13 22:20:36.969641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.824 qpair failed and we were unable to recover it. 00:37:17.824 [2024-07-13 22:20:36.969834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.824 [2024-07-13 22:20:36.969873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.824 qpair failed and we were unable to recover it. 00:37:17.824 [2024-07-13 22:20:36.970040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.824 [2024-07-13 22:20:36.970073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.824 qpair failed and we were unable to recover it. 00:37:17.824 [2024-07-13 22:20:36.970267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.824 [2024-07-13 22:20:36.970300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.824 qpair failed and we were unable to recover it. 00:37:17.824 [2024-07-13 22:20:36.970485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.824 [2024-07-13 22:20:36.970518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.824 qpair failed and we were unable to recover it. 00:37:17.824 [2024-07-13 22:20:36.970706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.824 [2024-07-13 22:20:36.970738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.824 qpair failed and we were unable to recover it. 00:37:17.824 [2024-07-13 22:20:36.970890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.824 [2024-07-13 22:20:36.970923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.824 qpair failed and we were unable to recover it. 00:37:17.824 [2024-07-13 22:20:36.971088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.824 [2024-07-13 22:20:36.971122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.824 qpair failed and we were unable to recover it. 00:37:17.824 [2024-07-13 22:20:36.971281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.824 [2024-07-13 22:20:36.971314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.824 qpair failed and we were unable to recover it. 
00:37:17.824 [2024-07-13 22:20:36.971478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.824 [2024-07-13 22:20:36.971510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.824 qpair failed and we were unable to recover it. 00:37:17.824 [2024-07-13 22:20:36.971697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.824 [2024-07-13 22:20:36.971728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.824 qpair failed and we were unable to recover it. 00:37:17.824 [2024-07-13 22:20:36.971954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.824 [2024-07-13 22:20:36.971987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.824 qpair failed and we were unable to recover it. 00:37:17.824 [2024-07-13 22:20:36.972175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.824 [2024-07-13 22:20:36.972208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.824 qpair failed and we were unable to recover it. 00:37:17.824 [2024-07-13 22:20:36.972423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.824 [2024-07-13 22:20:36.972456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.824 qpair failed and we were unable to recover it. 00:37:17.824 [2024-07-13 22:20:36.972609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.824 [2024-07-13 22:20:36.972641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.824 qpair failed and we were unable to recover it. 00:37:17.824 [2024-07-13 22:20:36.972830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.824 [2024-07-13 22:20:36.972864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.973063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.973100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.973288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.973319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.973512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.973545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 
00:37:17.825 [2024-07-13 22:20:36.973757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.973801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.973989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.974021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.974203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.974236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.974433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.974465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.974667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.974698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.974890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.974923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.975108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.975140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.975357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.975389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.975548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.975580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.975739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.975772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 
00:37:17.825 [2024-07-13 22:20:36.975984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.976017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.976174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.976206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.976391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.976423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.976584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.976615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.976825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.976858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.977085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.977122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.977338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.977371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.977559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.977591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.977787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.977819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.977989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.978022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 
00:37:17.825 [2024-07-13 22:20:36.978192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.978225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.978384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.978416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.978599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.978632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.978795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.978828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.979020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.979054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.979248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.979280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.979442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.979474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.979678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.979709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.979897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.979931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.980119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.980151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 
00:37:17.825 [2024-07-13 22:20:36.980358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.980390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.980603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.980635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.980799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.980832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.981027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.981059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.981246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.981277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.981441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.981475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.981662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.825 [2024-07-13 22:20:36.981695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.825 qpair failed and we were unable to recover it. 00:37:17.825 [2024-07-13 22:20:36.981883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.981921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.982119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.982152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.982333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.982364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 
00:37:17.826 [2024-07-13 22:20:36.982531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.982563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.982760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.982793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.982985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.983018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.983201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.983233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.983413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.983445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.983598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.983631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.983853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.983893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.984056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.984088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.984250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.984283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.984463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.984495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 
00:37:17.826 [2024-07-13 22:20:36.984685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.984717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.984897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.984930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.985121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.985153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.985341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.985374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.985559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.985591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.985782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.985815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.986008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.986040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.986195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.986226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.986438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.986471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.986635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.986668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 
00:37:17.826 [2024-07-13 22:20:36.986829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.986863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.987058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.987091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.987271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.987303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.987475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.987508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.987706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.987749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.987938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.987970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.988157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.988189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.988350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.988382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.988545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.988578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.988790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.988823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 
00:37:17.826 [2024-07-13 22:20:36.989049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.989082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.989269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.989302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.989467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.989499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.989682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.989714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.989879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.989913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.990104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.990137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.990351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.990383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.826 [2024-07-13 22:20:36.990534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.826 [2024-07-13 22:20:36.990566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.826 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.990778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.990810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.991012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.991045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 
00:37:17.827 [2024-07-13 22:20:36.991236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.991268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.991444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.991476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.991632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.991665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.991849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.991889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.992067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.992101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.992291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.992322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.992508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.992540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.992756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.992787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.992991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.993024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.993182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.993216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 
00:37:17.827 [2024-07-13 22:20:36.993408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.993440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.993660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.993693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.993841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.993879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.994084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.994116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.994323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.994356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.994541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.994573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.994774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.994807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.995008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.995041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.995196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.995228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.995417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.995450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 
00:37:17.827 [2024-07-13 22:20:36.995633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.995665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.995847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.995892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.996059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.996091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.996301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.996334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.996542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.996579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.996738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.996772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.996934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.996967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.997128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.997159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.997347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.997379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 00:37:17.827 [2024-07-13 22:20:36.997574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.827 [2024-07-13 22:20:36.997608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.827 qpair failed and we were unable to recover it. 
00:37:17.827 [2024-07-13 22:20:36.997771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.827 [2024-07-13 22:20:36.997803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:17.827 qpair failed and we were unable to recover it.
00:37:17.827 (the three messages above repeat back-to-back for every retried connect attempt, always with errno = 111 and the same tqpair=0x6150001ffe80, addr=10.0.0.2, port=4420; only the timestamps advance, from 22:20:36.997 through 22:20:37.043)
00:37:17.833 [2024-07-13 22:20:37.043261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.833 [2024-07-13 22:20:37.043294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:17.833 qpair failed and we were unable to recover it.
00:37:17.833 [2024-07-13 22:20:37.043506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.043554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.043726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.043759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.043974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.044008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.044175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.044208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.044400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.044433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.044616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.044648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.044861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.044900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.045073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.045105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.045294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.045327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.045514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.045546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 
00:37:17.833 [2024-07-13 22:20:37.045729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.045761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.045955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.045988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.046199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.046231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.046398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.046432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.046592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.046625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.046789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.046821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.047039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.047072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.047239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.047272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.047457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.047490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.047656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.047689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 
00:37:17.833 [2024-07-13 22:20:37.047858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.047896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.048086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.048120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.048310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.048343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.048556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.048588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.048770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.048806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.049025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.049058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.049219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.049251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.049406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.049438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.049634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.049666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.049854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.049894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 
00:37:17.833 [2024-07-13 22:20:37.050103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.833 [2024-07-13 22:20:37.050135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.833 qpair failed and we were unable to recover it. 00:37:17.833 [2024-07-13 22:20:37.050344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.050376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.050537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.050569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.050779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.050811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.050973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.051006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.051176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.051208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.051418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.051450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.051637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.051669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.051833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.051884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.052098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.052130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 
00:37:17.834 [2024-07-13 22:20:37.052285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.052318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.052510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.052543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.052696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.052729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.052918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.052952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.053142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.053175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.053338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.053372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.053563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.053595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.053754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.053787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.053972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.054006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.054163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.054195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 
00:37:17.834 [2024-07-13 22:20:37.054378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.054410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.054574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.054607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.054799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.054832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.055032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.055065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.055249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.055282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.055439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.055473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.055671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.055705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.055902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.055935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.056147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.056179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.056372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.056405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 
00:37:17.834 [2024-07-13 22:20:37.056590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.056623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.056809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.056842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.057007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.057040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.057232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.057265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.057423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.057470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.057665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.057698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.057884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.057918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.058076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.058111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.058301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.058334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.058524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.058557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 
00:37:17.834 [2024-07-13 22:20:37.058721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.058756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.834 [2024-07-13 22:20:37.058920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.834 [2024-07-13 22:20:37.058953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.834 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.059139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.059172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.059326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.059358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.059542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.059576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.059765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.059798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.059953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.059986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.060143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.060176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.060363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.060401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.060565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.060598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 
00:37:17.835 [2024-07-13 22:20:37.060765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.060798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.060956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.060989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.061172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.061205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.061417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.061449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.061665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.061698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.061882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.061916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.062096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.062129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.062290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.062323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.062538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.062571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.062759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.062793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 
00:37:17.835 [2024-07-13 22:20:37.063007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.063041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.063231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.063264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.063421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.063454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.063618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.063650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.063812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.063844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.064066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.064099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.064282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.064315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.064500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.064533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.064721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.064753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.064925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.064958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 
00:37:17.835 [2024-07-13 22:20:37.065137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.065169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.065361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.065394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.065558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.065590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.065785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.065818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.065982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.066020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.066190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.066222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.066408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.066441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.066654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.066686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.066880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.066912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.067078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.067110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 
00:37:17.835 [2024-07-13 22:20:37.067279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.067311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.067496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.067529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.835 qpair failed and we were unable to recover it. 00:37:17.835 [2024-07-13 22:20:37.067735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.835 [2024-07-13 22:20:37.067768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.836 qpair failed and we were unable to recover it. 00:37:17.836 [2024-07-13 22:20:37.067956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.836 [2024-07-13 22:20:37.067990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.836 qpair failed and we were unable to recover it. 00:37:17.836 [2024-07-13 22:20:37.068176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.836 [2024-07-13 22:20:37.068208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.836 qpair failed and we were unable to recover it. 00:37:17.836 [2024-07-13 22:20:37.068386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.836 [2024-07-13 22:20:37.068418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.836 qpair failed and we were unable to recover it. 00:37:17.836 [2024-07-13 22:20:37.068576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.836 [2024-07-13 22:20:37.068609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.836 qpair failed and we were unable to recover it. 00:37:17.836 [2024-07-13 22:20:37.068775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.836 [2024-07-13 22:20:37.068808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.836 qpair failed and we were unable to recover it. 00:37:17.836 [2024-07-13 22:20:37.068994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.836 [2024-07-13 22:20:37.069029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.836 qpair failed and we were unable to recover it. 00:37:17.836 [2024-07-13 22:20:37.069192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.836 [2024-07-13 22:20:37.069225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.836 qpair failed and we were unable to recover it. 
00:37:17.836 [2024-07-13 22:20:37.069438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.836 [2024-07-13 22:20:37.069471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.836 qpair failed and we were unable to recover it. 00:37:17.836 [2024-07-13 22:20:37.069636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.836 [2024-07-13 22:20:37.069669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.836 qpair failed and we were unable to recover it. 00:37:17.836 [2024-07-13 22:20:37.069850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.836 [2024-07-13 22:20:37.069888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.836 qpair failed and we were unable to recover it. 00:37:17.836 [2024-07-13 22:20:37.070099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.836 [2024-07-13 22:20:37.070131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.836 qpair failed and we were unable to recover it. 00:37:17.836 [2024-07-13 22:20:37.070292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.836 [2024-07-13 22:20:37.070326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.836 qpair failed and we were unable to recover it. 00:37:17.836 [2024-07-13 22:20:37.070493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.836 [2024-07-13 22:20:37.070526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.836 qpair failed and we were unable to recover it. 00:37:17.836 [2024-07-13 22:20:37.070683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.836 [2024-07-13 22:20:37.070716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.836 qpair failed and we were unable to recover it. 00:37:17.836 [2024-07-13 22:20:37.070937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.836 [2024-07-13 22:20:37.070971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.836 qpair failed and we were unable to recover it. 00:37:17.836 [2024-07-13 22:20:37.071162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.836 [2024-07-13 22:20:37.071204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.836 qpair failed and we were unable to recover it. 00:37:17.836 [2024-07-13 22:20:37.071397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.836 [2024-07-13 22:20:37.071430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.836 qpair failed and we were unable to recover it. 
00:37:17.836 [2024-07-13 22:20:37.071637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.836 [2024-07-13 22:20:37.071670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:17.836 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / sock connection error of tqpair=0x6150001ffe80 / qpair failed triple repeats unchanged, timestamps 2024-07-13 22:20:37.071870 through 22:20:37.082425 ...]
00:37:17.837 [2024-07-13 22:20:37.082548] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:37:17.837 [2024-07-13 22:20:37.082593] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:37:17.837 [2024-07-13 22:20:37.082618] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:37:17.837 [2024-07-13 22:20:37.082636] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:37:17.837 [2024-07-13 22:20:37.082656] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:37:17.837 [2024-07-13 22:20:37.082636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.837 [2024-07-13 22:20:37.082667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:17.837 qpair failed and we were unable to recover it.
00:37:17.837 [2024-07-13 22:20:37.082772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:37:17.837 [2024-07-13 22:20:37.082809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:37:17.837 [2024-07-13 22:20:37.082826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:37:17.837 [2024-07-13 22:20:37.082835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
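errno = 111 on Linux is ECONNREFUSED, i.e. nothing was accepting connections on 10.0.0.2:4420 when these qpairs dialed out. The app_setup_trace NOTICE lines above describe how a trace of the running nvmf target could be captured; a minimal sketch of that workflow, using only the command and shared-memory path quoted in the log (the /tmp destination is an arbitrary assumption, not something this log shows):

    # Snapshot the live tracepoints of the 'nvmf' app, instance 0
    # (command quoted verbatim in the NOTICE above)
    spdk_trace -s nvmf -i 0
    # Or keep the shared-memory trace file for offline analysis/debug
    # (source path quoted in the NOTICE above; destination is a placeholder)
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0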
[... the same connect() failed, errno = 111 / sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. triple repeats unchanged, timestamps 2024-07-13 22:20:37.082852 through 22:20:37.116048 ...]
00:37:17.841 [2024-07-13 22:20:37.116219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.841 [2024-07-13 22:20:37.116252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.841 qpair failed and we were unable to recover it. 00:37:17.841 [2024-07-13 22:20:37.116404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.841 [2024-07-13 22:20:37.116444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.841 qpair failed and we were unable to recover it. 00:37:17.841 [2024-07-13 22:20:37.116600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.841 [2024-07-13 22:20:37.116633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.841 qpair failed and we were unable to recover it. 00:37:17.841 [2024-07-13 22:20:37.116789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.841 [2024-07-13 22:20:37.116821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.841 qpair failed and we were unable to recover it. 00:37:17.841 [2024-07-13 22:20:37.117028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.841 [2024-07-13 22:20:37.117064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.841 qpair failed and we were unable to recover it. 00:37:17.841 [2024-07-13 22:20:37.117224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.841 [2024-07-13 22:20:37.117256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.841 qpair failed and we were unable to recover it. 00:37:17.841 [2024-07-13 22:20:37.117432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.841 [2024-07-13 22:20:37.117465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.117631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.117663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.117843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.117889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.118084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.118116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 
00:37:17.842 [2024-07-13 22:20:37.118314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.118346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.118503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.118536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.118698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.118731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.118904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.118938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.119108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.119141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.119361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.119399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.119588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.119620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.119777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.119810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.119997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.120031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.120218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.120251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 
00:37:17.842 [2024-07-13 22:20:37.120429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.120461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.120634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.120666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.120834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.120884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.121046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.121079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.121258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.121291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.121520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.121555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.121730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.121763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.121983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.122016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.122185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.122228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.122418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.122451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 
00:37:17.842 [2024-07-13 22:20:37.122626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.122658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.122821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.122861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.123049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.123081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.123249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.123281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.123475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.123507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.123694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.123726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.123919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.123953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.124135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.124168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.124327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.124366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.124534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.124566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 
00:37:17.842 [2024-07-13 22:20:37.124732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.124764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.124942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.124977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.125157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.125190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.125388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.125420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.125609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.125647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.125808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.842 [2024-07-13 22:20:37.125840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.842 qpair failed and we were unable to recover it. 00:37:17.842 [2024-07-13 22:20:37.126054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.126086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.126249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.126292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.126448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.126483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.126667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.126711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 
00:37:17.843 [2024-07-13 22:20:37.126874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.126908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.127069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.127102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.127276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.127313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.127512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.127543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.127707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.127740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.127906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.127941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.128130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.128163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.128343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.128377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.128563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.128596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.128747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.128779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 
00:37:17.843 [2024-07-13 22:20:37.128958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.128992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.129149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.129187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.129371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.129405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.129577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.129620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.129801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.129833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.130037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.130069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.130250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.130282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.130477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.130511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.130678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.130711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.130876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.130910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 
00:37:17.843 [2024-07-13 22:20:37.131101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.131134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.131337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.131371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.131565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.131598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.131815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.131859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.132035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.132068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.132248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.132282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.132484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.132517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.132681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.132713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.132901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.132933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.133126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.133158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 
00:37:17.843 [2024-07-13 22:20:37.133363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.133396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.133609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.133642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.133825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.133861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.134057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.134092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.134267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.134304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.134473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.134509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.843 [2024-07-13 22:20:37.134706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.843 [2024-07-13 22:20:37.134739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.843 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.134903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.134937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.135105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.135140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.135385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.135419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 
00:37:17.844 [2024-07-13 22:20:37.135581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.135614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.135786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.135818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.136026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.136059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.136248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.136281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.136478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.136510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.136669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.136702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.136892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.136926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.137092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.137125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.137306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.137339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.137507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.137542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 
00:37:17.844 [2024-07-13 22:20:37.137717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.137750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.137971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.138024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.138221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.138272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.138479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.138516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.138683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.138718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.138895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.138931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.139103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.139137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.139383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.139418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.139601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.139635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.139829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.139878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 
00:37:17.844 [2024-07-13 22:20:37.140045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.140078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.140281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.140316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.140492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.140526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.140742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.140775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.140963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.140998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.141160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.141194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.141362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.141395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.141549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.141583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.141767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.141801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.141983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.142017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 
00:37:17.844 [2024-07-13 22:20:37.142190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.142236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.142432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.142467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.142670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.142702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.142877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.142914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.143075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.143113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.143289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.143321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.143490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.143522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.844 qpair failed and we were unable to recover it. 00:37:17.844 [2024-07-13 22:20:37.143717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.844 [2024-07-13 22:20:37.143750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.845 qpair failed and we were unable to recover it. 00:37:17.845 [2024-07-13 22:20:37.143959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.845 [2024-07-13 22:20:37.143992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.845 qpair failed and we were unable to recover it. 00:37:17.845 [2024-07-13 22:20:37.144179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:17.845 [2024-07-13 22:20:37.144211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:17.845 qpair failed and we were unable to recover it. 
00:37:17.845 [2024-07-13 22:20:37.144439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.845 [2024-07-13 22:20:37.144472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:17.845 qpair failed and we were unable to recover it.
00:37:17.845 [2024-07-13 22:20:37.144663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.845 [2024-07-13 22:20:37.144707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:17.845 qpair failed and we were unable to recover it.
00:37:17.845 [2024-07-13 22:20:37.144897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:17.845 [2024-07-13 22:20:37.144931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:17.845 qpair failed and we were unable to recover it.
[... identical connect()/nvme_tcp_qpair_connect_sock error triplets repeat without interruption from 22:20:37.145132 through 22:20:37.191373, alternating between tqpair=0x6150001ffe80 and tqpair=0x615000210000, always against addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it." ...]
00:37:18.130 [2024-07-13 22:20:37.191340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.130 [2024-07-13 22:20:37.191373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.130 qpair failed and we were unable to recover it.
00:37:18.130 [2024-07-13 22:20:37.191582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.191615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.191814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.191848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.192042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.192076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.192240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.192272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.192461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.192494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.192672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.192705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.192885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.192920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.193089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.193123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.193306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.193340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.193497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.193530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 
00:37:18.130 [2024-07-13 22:20:37.193702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.193736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.193956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.193995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.194162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.194197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.194371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.194404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.194582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.194617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.194811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.194845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.195067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.195101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.195270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.195303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.195464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.195497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.195655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.195690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 
00:37:18.130 [2024-07-13 22:20:37.195876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.195910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.196104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.196138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.196295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.196330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.196519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.196552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.196768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.196801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.196983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.197017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.197227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.197261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.197450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.197484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.197644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.197678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.197842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.197884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 
00:37:18.130 [2024-07-13 22:20:37.198077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.130 [2024-07-13 22:20:37.198110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.130 qpair failed and we were unable to recover it. 00:37:18.130 [2024-07-13 22:20:37.198275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.131 [2024-07-13 22:20:37.198309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.131 qpair failed and we were unable to recover it. 00:37:18.131 [2024-07-13 22:20:37.198510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.131 [2024-07-13 22:20:37.198544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.131 qpair failed and we were unable to recover it. 00:37:18.131 [2024-07-13 22:20:37.198727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.131 [2024-07-13 22:20:37.198761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.131 qpair failed and we were unable to recover it. 00:37:18.131 [2024-07-13 22:20:37.198924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.131 [2024-07-13 22:20:37.198959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.131 qpair failed and we were unable to recover it. 00:37:18.131 [2024-07-13 22:20:37.199123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.131 [2024-07-13 22:20:37.199157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.131 qpair failed and we were unable to recover it. 00:37:18.131 [2024-07-13 22:20:37.199316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.131 [2024-07-13 22:20:37.199349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.131 qpair failed and we were unable to recover it. 00:37:18.131 [2024-07-13 22:20:37.199508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.131 [2024-07-13 22:20:37.199541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.131 qpair failed and we were unable to recover it. 00:37:18.131 [2024-07-13 22:20:37.199762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.131 [2024-07-13 22:20:37.199815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.131 qpair failed and we were unable to recover it. 00:37:18.131 [2024-07-13 22:20:37.200004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.131 [2024-07-13 22:20:37.200040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.131 qpair failed and we were unable to recover it. 
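Context for the errors above: errno 111 is ECONNREFUSED on Linux, meaning the TCP SYN to 10.0.0.2:4420 (the standard NVMe/TCP port) was answered with RST because nothing was listening there yet, so each qpair connect attempt fails immediately. The following standalone sketch (not SPDK source; the address and port simply mirror the log) shows how a plain connect() surfaces exactly this errno:

/* Minimal standalone sketch, assuming no listener is bound to 10.0.0.2:4420.
 * It reproduces the condition posix_sock_create logs above: the kernel
 * answers the SYN with RST and connect() fails with ECONNREFUSED (111). */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* Prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}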
[identical connect() failed (errno = 111) / qpair recovery failures for tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 repeat from 22:20:37.200004 through 22:20:37.227396 and are elided here]
00:37:18.134 [2024-07-13 22:20:37.227555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.134 [2024-07-13 22:20:37.227588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.134 qpair failed and we were unable to recover it. 00:37:18.134 [2024-07-13 22:20:37.227748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.134 [2024-07-13 22:20:37.227786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.134 qpair failed and we were unable to recover it. 00:37:18.134 [2024-07-13 22:20:37.227962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.134 [2024-07-13 22:20:37.227995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.134 qpair failed and we were unable to recover it. 00:37:18.134 [2024-07-13 22:20:37.228165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.134 [2024-07-13 22:20:37.228197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.134 qpair failed and we were unable to recover it. 00:37:18.134 [2024-07-13 22:20:37.228384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.134 [2024-07-13 22:20:37.228427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.134 qpair failed and we were unable to recover it. 00:37:18.134 [2024-07-13 22:20:37.228589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.134 [2024-07-13 22:20:37.228620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.134 qpair failed and we were unable to recover it. 00:37:18.134 [2024-07-13 22:20:37.228786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.134 [2024-07-13 22:20:37.228818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.134 qpair failed and we were unable to recover it. 00:37:18.134 [2024-07-13 22:20:37.229021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.134 [2024-07-13 22:20:37.229054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.134 qpair failed and we were unable to recover it. 00:37:18.134 [2024-07-13 22:20:37.229211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.134 [2024-07-13 22:20:37.229244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.134 qpair failed and we were unable to recover it. 00:37:18.134 [2024-07-13 22:20:37.229467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.134 [2024-07-13 22:20:37.229499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.134 qpair failed and we were unable to recover it. 
00:37:18.134 [2024-07-13 22:20:37.229673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.134 [2024-07-13 22:20:37.229705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.134 qpair failed and we were unable to recover it. 00:37:18.134 [2024-07-13 22:20:37.229903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.134 [2024-07-13 22:20:37.229936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.134 qpair failed and we were unable to recover it. 00:37:18.134 [2024-07-13 22:20:37.230100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.134 [2024-07-13 22:20:37.230132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.134 qpair failed and we were unable to recover it. 00:37:18.134 [2024-07-13 22:20:37.230322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.230354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.230521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.230555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.230752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.230785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.230968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.231002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.231161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.231194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.231379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.231411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.231591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.231622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 
00:37:18.135 [2024-07-13 22:20:37.231814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.231848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.232032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.232065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.232232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.232264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.232415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.232447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.232606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.232638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.232788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.232820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.233007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.233039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.233227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.233258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.233439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.233471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.233658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.233691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 
00:37:18.135 [2024-07-13 22:20:37.233878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.233919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.234091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.234124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.234316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.234350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.234499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.234531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.234774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.234807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.234971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.235004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.235188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.235221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.235406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.235438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.235627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.235658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.235837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.235875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 
00:37:18.135 [2024-07-13 22:20:37.236057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.236088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.236280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.236318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.236514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.236546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.236737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.236769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.236973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.237006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.237176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.237215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.237409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.237441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.237626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.237658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.237841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.237877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.238046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.238078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 
00:37:18.135 [2024-07-13 22:20:37.238238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.238269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.238440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.238472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.238690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.238722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.135 [2024-07-13 22:20:37.238884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.135 [2024-07-13 22:20:37.238918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.135 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.239113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.239145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.239328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.239359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.239565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.239597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.239750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.239782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.239947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.239980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.240146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.240188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 
00:37:18.136 [2024-07-13 22:20:37.240343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.240375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.240635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.240668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.240847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.240887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.241051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.241083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.241288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.241319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.241480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.241511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.241704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.241736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.241936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.241969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.242154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.242197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.242369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.242402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 
00:37:18.136 [2024-07-13 22:20:37.242591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.242622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.242772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.242803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.242992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.243024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.243202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.243234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.243420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.243452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.243608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.243640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.243816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.243847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.244079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.244112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.244324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.244368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.244540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.244573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 
00:37:18.136 [2024-07-13 22:20:37.244751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.244783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.244954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.244992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.245183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.245230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.245412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.245445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.245641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.245672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.245851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.245901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.246111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.246145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.246344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.246389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.246550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.246583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.246767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.246799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 
00:37:18.136 [2024-07-13 22:20:37.246965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.246998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.247184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.247215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.247380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.247412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.247584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.136 [2024-07-13 22:20:37.247617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.136 qpair failed and we were unable to recover it. 00:37:18.136 [2024-07-13 22:20:37.247777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.247808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.248015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.248049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.248231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.248263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.248469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.248502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.248684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.248716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.248901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.248941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 
00:37:18.137 [2024-07-13 22:20:37.249097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.249130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.249293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.249325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.249482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.249514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.249671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.249704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.249892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.249932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.250095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.250127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.250344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.250376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.250572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.250604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.250766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.250798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.250986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.251019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 
00:37:18.137 [2024-07-13 22:20:37.251191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.251223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.251398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.251430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.251644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.251675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.251861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.251899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.252064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.252096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.252299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.252332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.252508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.252541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.252720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.252751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.252944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.252977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.253150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.253187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 
00:37:18.137 [2024-07-13 22:20:37.253367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.253399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.253584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.253621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.253826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.253857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.254035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.254066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.254312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.254344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.254494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.254527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.254709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.254740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.254929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.254961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.255137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.255169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 00:37:18.137 [2024-07-13 22:20:37.255354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.137 [2024-07-13 22:20:37.255386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.137 qpair failed and we were unable to recover it. 
00:37:18.137 [2024-07-13 22:20:37.255558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.137 [2024-07-13 22:20:37.255590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.137 qpair failed and we were unable to recover it.
00:37:18.137-00:37:18.143 [condensed: the identical error pair above — posix.c:1038:posix_sock_create connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 — repeats back-to-back from 22:20:37.255753 through 22:20:37.300059 (roughly 200 further attempts), every one ending "qpair failed and we were unable to recover it."]
00:37:18.143 [2024-07-13 22:20:37.300251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.300283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 00:37:18.143 [2024-07-13 22:20:37.300468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.300500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 00:37:18.143 [2024-07-13 22:20:37.300674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.300707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 00:37:18.143 [2024-07-13 22:20:37.300887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.300920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 00:37:18.143 [2024-07-13 22:20:37.301077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.301110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 00:37:18.143 [2024-07-13 22:20:37.301323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.301355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 00:37:18.143 [2024-07-13 22:20:37.301550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.301582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 00:37:18.143 [2024-07-13 22:20:37.301764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.301796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 00:37:18.143 [2024-07-13 22:20:37.301990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.302027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 00:37:18.143 [2024-07-13 22:20:37.302218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.302250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 
00:37:18.143 [2024-07-13 22:20:37.302415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.302447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 00:37:18.143 [2024-07-13 22:20:37.302629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.302661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 00:37:18.143 [2024-07-13 22:20:37.302817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.302849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 00:37:18.143 [2024-07-13 22:20:37.303021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.303054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 00:37:18.143 [2024-07-13 22:20:37.303218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.303251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 00:37:18.143 [2024-07-13 22:20:37.303412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.303444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 00:37:18.143 [2024-07-13 22:20:37.303708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.303741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 00:37:18.143 [2024-07-13 22:20:37.303957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.303991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 00:37:18.143 [2024-07-13 22:20:37.304257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.304289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 00:37:18.143 [2024-07-13 22:20:37.304463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.304495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 
00:37:18.143 [2024-07-13 22:20:37.304683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.304715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 00:37:18.143 [2024-07-13 22:20:37.304889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.304921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 00:37:18.143 [2024-07-13 22:20:37.305108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.305140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 00:37:18.143 [2024-07-13 22:20:37.305314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.305348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 00:37:18.143 [2024-07-13 22:20:37.305529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.305562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 00:37:18.143 [2024-07-13 22:20:37.305739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.143 [2024-07-13 22:20:37.305772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.143 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.305976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.306010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.306201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.306234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.306417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.306449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.306605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.306637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 
00:37:18.144 [2024-07-13 22:20:37.306825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.306857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.307047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.307079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.307341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.307373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.307557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.307590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.307749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.307781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.307945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.307978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.308160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.308193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.308354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.308386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.308568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.308600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.308789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.308821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 
00:37:18.144 [2024-07-13 22:20:37.308990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.309023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.309177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.309210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.309367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.309399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.309560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.309592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.309789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.309822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.309985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.310018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.310179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.310213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.310401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.310434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.310587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.310635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.310801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.310833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 
00:37:18.144 [2024-07-13 22:20:37.311021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.311054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.311257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.311289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.311459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.311492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.311652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.311684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.311849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.311890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.312053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.312087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.312268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.312301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.312490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.312522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.312679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.312710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.312880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.312913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 
00:37:18.144 [2024-07-13 22:20:37.313077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.313111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.313308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.313340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.313509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.313542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.144 [2024-07-13 22:20:37.313699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.144 [2024-07-13 22:20:37.313730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.144 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.313916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.313949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.314140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.314173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.314396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.314430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.314620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.314652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.314837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.314874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.315064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.315097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 
00:37:18.145 [2024-07-13 22:20:37.315259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.315290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.315449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.315481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.315656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.315688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.315855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.315915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.316106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.316138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.316327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.316359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.316562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.316593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.316809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.316841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.317008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.317040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.317203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.317235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 
00:37:18.145 [2024-07-13 22:20:37.317399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.317431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.317594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.317626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.317837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.317875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.318049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.318085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.318251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.318284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.318441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.318474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.318647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.318678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.318864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.318902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.319098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.319136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.319326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.319359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 
00:37:18.145 [2024-07-13 22:20:37.319559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.319592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.319754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.319787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.319997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.320030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.320192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.320224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.320390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.320422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.320577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.320610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.320795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.320828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.321004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.321036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.321196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.321227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.321413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.321445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 
00:37:18.145 [2024-07-13 22:20:37.321637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.321669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.321846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.321883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.322083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.145 [2024-07-13 22:20:37.322115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.145 qpair failed and we were unable to recover it. 00:37:18.145 [2024-07-13 22:20:37.322339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.322371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.322575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.322606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.322773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.322804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.322988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.323020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.323238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.323271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.323461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.323494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.323699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.323734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 
00:37:18.146 [2024-07-13 22:20:37.323919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.323972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.324139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.324171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.324385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.324431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.324615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.324647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.324834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.324870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.325042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.325075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.325241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.325273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.325454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.325486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.325648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.325681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.325844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.325880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 
00:37:18.146 [2024-07-13 22:20:37.326057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.326090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.326277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.326310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.326508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.326541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.326725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.326758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.326942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.326974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.327143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.327175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.327340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.327371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.327547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.327580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.327772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.327809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.328068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.328101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 
00:37:18.146 [2024-07-13 22:20:37.328325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.328358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.328520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.328553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.328723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.328755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.328924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.328957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.329123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.329155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.329321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.329353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.329536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.329568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.146 qpair failed and we were unable to recover it. 00:37:18.146 [2024-07-13 22:20:37.329755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.146 [2024-07-13 22:20:37.329786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.329944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.329977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.330146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.330178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 
00:37:18.147 [2024-07-13 22:20:37.330354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.330386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.330562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.330595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.330807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.330839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.331014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.331045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.331214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.331246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.331420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.331452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.331663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.331695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.331891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.331924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.332088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.332122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.332317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.332348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 
00:37:18.147 [2024-07-13 22:20:37.332564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.332596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.332789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.332822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.333005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.333038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.333212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.333245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.333427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.333460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.333663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.333695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.333881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.333914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.334097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.334128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.334283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.334315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.334515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.334548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 
00:37:18.147 [2024-07-13 22:20:37.334721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.334754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.334940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.334973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.335146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.335178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.335332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.335364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.335553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.335586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.335740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.335772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.335971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.336004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.336165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.336198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.336354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.336390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.336582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.336614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 
00:37:18.147 [2024-07-13 22:20:37.336782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.336813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.337004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.337036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.337203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.337236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.337441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.337474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.337635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.337667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.337822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.337854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.338045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.338089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.338254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.147 [2024-07-13 22:20:37.338286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.147 qpair failed and we were unable to recover it. 00:37:18.147 [2024-07-13 22:20:37.338466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.338498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.338689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.338721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 
00:37:18.148 [2024-07-13 22:20:37.338881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.338914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.339126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.339159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.339376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.339408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.339588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.339620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.339774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.339806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.339980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.340013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.340181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.340213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.340405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.340438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.340624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.340655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.340887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.340920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 
00:37:18.148 [2024-07-13 22:20:37.341086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.341118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.341302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.341334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.341487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.341519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.341724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.341755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.341945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.341978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.342139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.342172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.342337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.342370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.342559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.342591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.342780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.342812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.343004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.343037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 
00:37:18.148 [2024-07-13 22:20:37.343225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.343258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.343422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.343455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.343612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.343645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.343809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.343846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.344014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.344046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.344199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.344232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.344392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.344424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.344607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.344640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.344826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.344862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.345068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.345100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 
00:37:18.148 [2024-07-13 22:20:37.345288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.345319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.345499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.345534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.345705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.345737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.345930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.345964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.346154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.346186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.346346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.346384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.346567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.346599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.346770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.148 [2024-07-13 22:20:37.346802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.148 qpair failed and we were unable to recover it. 00:37:18.148 [2024-07-13 22:20:37.347003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.347036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.347199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.347231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 
00:37:18.149 [2024-07-13 22:20:37.347423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.347455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.347643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.347676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.347861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.347898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.348090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.348121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.348309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.348342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.348503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.348535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.348721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.348754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.348934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.348968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.349123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.349157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.349336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.349368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 
00:37:18.149 [2024-07-13 22:20:37.349574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.349607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.349774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.349806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.349964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.349996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.350209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.350241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.350427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.350460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.350661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.350693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.350850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.350889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.351113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.351147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.351331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.351363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.351548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.351579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 
00:37:18.149 [2024-07-13 22:20:37.351755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.351801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.351965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.351998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.352179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.352211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.352372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.352404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.352602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.352634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.352813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.352844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.353033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.353066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.353257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.353289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.353439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.353475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.353630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.353662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 
00:37:18.149 [2024-07-13 22:20:37.353852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.353888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.354059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.354092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.354277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.354309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.354507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.354540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.354721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.354754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.354940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.354973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.355175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.355207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.355417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.355448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.355628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.149 [2024-07-13 22:20:37.355660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.149 qpair failed and we were unable to recover it. 00:37:18.149 [2024-07-13 22:20:37.355862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.355909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 
00:37:18.150 [2024-07-13 22:20:37.356101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.356133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.356293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.356326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.356525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.356558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.356751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.356783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.356970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.357003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.357163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.357194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.357351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.357385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.357569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.357601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.357759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.357790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.357987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.358020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 
00:37:18.150 [2024-07-13 22:20:37.358180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.358212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.358369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.358401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.358616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.358649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.358816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.358847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.359050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.359081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.359285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.359317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.359508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.359540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.359724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.359756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.359915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.359948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.360140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.360173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 
00:37:18.150 [2024-07-13 22:20:37.360336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.360368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.360522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.360553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.360715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.360747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.360911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.360942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.361129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.361160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.361317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.361351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.361554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.361587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.361770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.361802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.361990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.362028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.362185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.362217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 
00:37:18.150 [2024-07-13 22:20:37.362400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.362432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.362611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.362643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.362807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.362839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.363030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.363063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.363223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.363255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.363461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.150 [2024-07-13 22:20:37.363492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.150 qpair failed and we were unable to recover it. 00:37:18.150 [2024-07-13 22:20:37.363677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.151 [2024-07-13 22:20:37.363708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.151 qpair failed and we were unable to recover it. 00:37:18.151 [2024-07-13 22:20:37.363876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.151 [2024-07-13 22:20:37.363908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.151 qpair failed and we were unable to recover it. 00:37:18.151 [2024-07-13 22:20:37.364065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.151 [2024-07-13 22:20:37.364097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.151 qpair failed and we were unable to recover it. 00:37:18.151 [2024-07-13 22:20:37.364295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.151 [2024-07-13 22:20:37.364328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.151 qpair failed and we were unable to recover it. 
00:37:18.151 [2024-07-13 22:20:37.364530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.151 [2024-07-13 22:20:37.364562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.151 qpair failed and we were unable to recover it. 00:37:18.151 [2024-07-13 22:20:37.364723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.151 [2024-07-13 22:20:37.364756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.151 qpair failed and we were unable to recover it. 00:37:18.151 [2024-07-13 22:20:37.364950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.151 [2024-07-13 22:20:37.364982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.151 qpair failed and we were unable to recover it. 00:37:18.151 [2024-07-13 22:20:37.365146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.151 [2024-07-13 22:20:37.365177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.151 qpair failed and we were unable to recover it. 00:37:18.151 [2024-07-13 22:20:37.365361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.151 [2024-07-13 22:20:37.365404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.151 qpair failed and we were unable to recover it. 00:37:18.151 [2024-07-13 22:20:37.365565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.151 [2024-07-13 22:20:37.365597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.151 qpair failed and we were unable to recover it. 00:37:18.151 [2024-07-13 22:20:37.365765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.152 [2024-07-13 22:20:37.365798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.152 qpair failed and we were unable to recover it. 00:37:18.152 [2024-07-13 22:20:37.365960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.152 [2024-07-13 22:20:37.365992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.152 qpair failed and we were unable to recover it. 00:37:18.152 [2024-07-13 22:20:37.366161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.152 [2024-07-13 22:20:37.366193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.152 qpair failed and we were unable to recover it. 00:37:18.152 [2024-07-13 22:20:37.366360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.152 [2024-07-13 22:20:37.366394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.152 qpair failed and we were unable to recover it. 
00:37:18.157 [2024-07-13 22:20:37.406541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.406572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.406756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.406788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.406960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.406992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.407180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.407212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.407371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.407403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.407581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.407614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.407776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.407809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.407978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.408010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.408176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.408209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.408402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.408434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 
00:37:18.157 [2024-07-13 22:20:37.408589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.408622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.408832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.408864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.409035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.409067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.409267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.409299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.409496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.409529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.409693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.409729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.409919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.409953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.410141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.410174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.410337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.410370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.410562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.410595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 
00:37:18.157 [2024-07-13 22:20:37.410768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.410800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.410992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.411025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.411207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.411240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.411411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.411444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.411659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.411692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.411880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.411913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.412071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.412103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.412258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.412290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.412501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.412534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.412723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.412755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 
00:37:18.157 [2024-07-13 22:20:37.412923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.412956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.413129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.413161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.413329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.413361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.413575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.413608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.413799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.413832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.414036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.414069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.414222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.157 [2024-07-13 22:20:37.414254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.157 qpair failed and we were unable to recover it. 00:37:18.157 [2024-07-13 22:20:37.414417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.414449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.414616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.414648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.414817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.414850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 
00:37:18.158 [2024-07-13 22:20:37.415020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.415052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.415211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.415243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.415434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.415466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.415656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.415690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.415850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.415890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.416045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.416078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.416269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.416301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.416463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.416496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.416659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.416691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.416879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.416912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 
00:37:18.158 [2024-07-13 22:20:37.417069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.417101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.417281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.417314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.417469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.417502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.417653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.417686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.417839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.417877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.418045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.418082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.418270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.418303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.418452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.418484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.418667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.418700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.418889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.418921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 
00:37:18.158 [2024-07-13 22:20:37.419112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.419156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.419337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.419369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.419595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.419627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.419812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.419844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.420017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.420050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.420234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.420267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.420452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.420484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.420666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.420698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.420849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.420902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.421067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.421100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 
00:37:18.158 [2024-07-13 22:20:37.421285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.421317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.421485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.421519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.421705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.421738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.421895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.421929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.422110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.422142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.422337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.422370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.422531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.422563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.158 [2024-07-13 22:20:37.422730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.158 [2024-07-13 22:20:37.422762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.158 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.422935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.422968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.423122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.423154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 
00:37:18.159 [2024-07-13 22:20:37.423338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.423371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.423532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.423563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.423724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.423757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.423935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.423968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.424139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.424172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.424359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.424391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.424577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.424609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.424798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.424830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.424993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.425026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.425218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.425251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 
00:37:18.159 [2024-07-13 22:20:37.425431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.425464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.425655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.425688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.425847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.425885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.426060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.426092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.426253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.426286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.426470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.426506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.426678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.426710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.426898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.426930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.427099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.427158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.427361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.427395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 
00:37:18.159 [2024-07-13 22:20:37.427586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.427619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.427805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.427838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.428017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.428049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.428241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.428273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.428435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.428468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.428650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.428682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.428877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.428910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.429092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.429124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.429310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.429343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.429509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.429541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 
00:37:18.159 [2024-07-13 22:20:37.429762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.429794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.429959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.429992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.430149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.430183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.430381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.430413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.430596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.430628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.430848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.430887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.431047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.431079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.431284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.431317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.159 [2024-07-13 22:20:37.431482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.159 [2024-07-13 22:20:37.431514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.159 qpair failed and we were unable to recover it. 00:37:18.160 [2024-07-13 22:20:37.431701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.160 [2024-07-13 22:20:37.431733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.160 qpair failed and we were unable to recover it. 
00:37:18.160 [2024-07-13 22:20:37.431934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.160 [2024-07-13 22:20:37.431966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.160 qpair failed and we were unable to recover it. 00:37:18.160 [2024-07-13 22:20:37.432126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.160 [2024-07-13 22:20:37.432158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.160 qpair failed and we were unable to recover it. 00:37:18.160 [2024-07-13 22:20:37.432321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.160 [2024-07-13 22:20:37.432352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.160 qpair failed and we were unable to recover it. 00:37:18.160 [2024-07-13 22:20:37.432518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.160 [2024-07-13 22:20:37.432550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.160 qpair failed and we were unable to recover it. 00:37:18.160 [2024-07-13 22:20:37.432732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.160 [2024-07-13 22:20:37.432775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.160 qpair failed and we were unable to recover it. 00:37:18.160 [2024-07-13 22:20:37.432968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.160 [2024-07-13 22:20:37.433001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.160 qpair failed and we were unable to recover it. 00:37:18.160 [2024-07-13 22:20:37.433222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.160 [2024-07-13 22:20:37.433254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.160 qpair failed and we were unable to recover it. 00:37:18.160 [2024-07-13 22:20:37.433404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.160 [2024-07-13 22:20:37.433436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.160 qpair failed and we were unable to recover it. 00:37:18.160 [2024-07-13 22:20:37.433629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.160 [2024-07-13 22:20:37.433661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.160 qpair failed and we were unable to recover it. 00:37:18.160 [2024-07-13 22:20:37.433815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.160 [2024-07-13 22:20:37.433847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.160 qpair failed and we were unable to recover it. 
00:37:18.160 [2024-07-13 22:20:37.434018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.160 [2024-07-13 22:20:37.434051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.160 qpair failed and we were unable to recover it. 00:37:18.160 [2024-07-13 22:20:37.434233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.160 [2024-07-13 22:20:37.434264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.160 qpair failed and we were unable to recover it. 00:37:18.160 [2024-07-13 22:20:37.434452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.160 [2024-07-13 22:20:37.434484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.160 qpair failed and we were unable to recover it. 00:37:18.160 [2024-07-13 22:20:37.434665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.160 [2024-07-13 22:20:37.434697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.160 qpair failed and we were unable to recover it. 00:37:18.160 [2024-07-13 22:20:37.434858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.160 [2024-07-13 22:20:37.434895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.160 qpair failed and we were unable to recover it. 00:37:18.160 [2024-07-13 22:20:37.435049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.160 [2024-07-13 22:20:37.435084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.160 qpair failed and we were unable to recover it. 00:37:18.160 [2024-07-13 22:20:37.435245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.160 [2024-07-13 22:20:37.435277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.160 qpair failed and we were unable to recover it. 00:37:18.160 [2024-07-13 22:20:37.435466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.160 [2024-07-13 22:20:37.435498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.160 qpair failed and we were unable to recover it. 00:37:18.160 [2024-07-13 22:20:37.435658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.160 [2024-07-13 22:20:37.435691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.160 qpair failed and we were unable to recover it. 00:37:18.160 [2024-07-13 22:20:37.435903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.160 [2024-07-13 22:20:37.435936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.160 qpair failed and we were unable to recover it. 
00:37:18.160 [2024-07-13 22:20:37.436103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.160 [2024-07-13 22:20:37.436135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.160 qpair failed and we were unable to recover it.
00:37:18.160 [... the same connect()/qpair-failure triplet repeats continuously from 22:20:37.436103 through 22:20:37.479992: every reconnect attempt to 10.0.0.2, port=4420 on tqpair=0x6150001f2780 fails with errno = 111 and the qpair cannot be recovered; the intervening repeated lines are elided here ...]
00:37:18.165 [2024-07-13 22:20:37.479959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.165 [2024-07-13 22:20:37.479992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.165 qpair failed and we were unable to recover it.
00:37:18.165 [2024-07-13 22:20:37.480183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.165 [2024-07-13 22:20:37.480216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.165 qpair failed and we were unable to recover it. 00:37:18.165 [2024-07-13 22:20:37.480380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.165 [2024-07-13 22:20:37.480413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.165 qpair failed and we were unable to recover it. 00:37:18.165 [2024-07-13 22:20:37.480597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.480628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.480811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.480843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.481017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.481051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.481240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.481273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.481447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.481480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.481666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.481699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.481877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.481910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.482064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.482096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 
00:37:18.166 [2024-07-13 22:20:37.482261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.482295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.482457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.482489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.482658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.482691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.482873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.482906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.483073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.483105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.483299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.483331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.483482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.483514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.483675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.483708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.483922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.483956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.484132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.484165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 
00:37:18.166 [2024-07-13 22:20:37.484325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.484357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.484545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.484576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.484740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.484773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.484929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.484963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.485125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.485157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.485311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.485347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.485514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.485547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.485703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.485735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.485906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.485938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.486099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.486132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 
00:37:18.166 [2024-07-13 22:20:37.486321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.486364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.486523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.486557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.486720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.486753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.486909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.486941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.487104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.487136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.487330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.487362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.487530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.487562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.487717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.487749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.487908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.487942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.488110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.488142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 
00:37:18.166 [2024-07-13 22:20:37.488327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.488359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.488524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.488556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.488744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.166 [2024-07-13 22:20:37.488776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.166 qpair failed and we were unable to recover it. 00:37:18.166 [2024-07-13 22:20:37.488957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.488990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.489175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.489207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.489374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.489406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.489598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.489631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.489814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.489858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.490039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.490072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.490268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.490299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 
00:37:18.167 [2024-07-13 22:20:37.490466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.490498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.490665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.490696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.490892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.490926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.491088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.491122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.491313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.491345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.491516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.491548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.491711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.491742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.491941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.491974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.492133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.492173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.492360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.492393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 
00:37:18.167 [2024-07-13 22:20:37.492558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.492590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.492776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.492808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.493000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.493032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.493226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.493257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.493459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.493491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.493679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.493715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.493903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.493936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.494126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.494159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.494343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.494375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.494541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.494573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 
00:37:18.167 [2024-07-13 22:20:37.494785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.494817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.495000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.495034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.495232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.495264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.495447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.495479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.495644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.495677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.495862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.495901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.496067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.496098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.496288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.496319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.496482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.496514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.496700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.496733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 
00:37:18.167 [2024-07-13 22:20:37.496900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.496933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.497117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.497149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.497371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.497413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.167 [2024-07-13 22:20:37.497614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.167 [2024-07-13 22:20:37.497648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.167 qpair failed and we were unable to recover it. 00:37:18.168 [2024-07-13 22:20:37.497845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.168 [2024-07-13 22:20:37.497893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.168 qpair failed and we were unable to recover it. 00:37:18.168 [2024-07-13 22:20:37.498062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.168 [2024-07-13 22:20:37.498103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.168 qpair failed and we were unable to recover it. 00:37:18.168 [2024-07-13 22:20:37.498271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.168 [2024-07-13 22:20:37.498304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.168 qpair failed and we were unable to recover it. 00:37:18.168 [2024-07-13 22:20:37.498489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.168 [2024-07-13 22:20:37.498522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.168 qpair failed and we were unable to recover it. 00:37:18.168 [2024-07-13 22:20:37.498711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.168 [2024-07-13 22:20:37.498749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.168 qpair failed and we were unable to recover it. 00:37:18.168 [2024-07-13 22:20:37.498982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.168 [2024-07-13 22:20:37.499016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.168 qpair failed and we were unable to recover it. 
00:37:18.168 [2024-07-13 22:20:37.499197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.168 [2024-07-13 22:20:37.499229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.168 qpair failed and we were unable to recover it. 00:37:18.168 [2024-07-13 22:20:37.499414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.168 [2024-07-13 22:20:37.499447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.168 qpair failed and we were unable to recover it. 00:37:18.168 [2024-07-13 22:20:37.499622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.168 [2024-07-13 22:20:37.499658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.168 qpair failed and we were unable to recover it. 00:37:18.168 [2024-07-13 22:20:37.499862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.168 [2024-07-13 22:20:37.499906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.168 qpair failed and we were unable to recover it. 00:37:18.168 [2024-07-13 22:20:37.500094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.168 [2024-07-13 22:20:37.500139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.168 qpair failed and we were unable to recover it. 00:37:18.168 [2024-07-13 22:20:37.500338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.168 [2024-07-13 22:20:37.500380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.168 qpair failed and we were unable to recover it. 00:37:18.168 [2024-07-13 22:20:37.500550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.168 [2024-07-13 22:20:37.500582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.168 qpair failed and we were unable to recover it. 00:37:18.168 [2024-07-13 22:20:37.500767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.168 [2024-07-13 22:20:37.500818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.168 qpair failed and we were unable to recover it. 00:37:18.168 [2024-07-13 22:20:37.501007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.168 [2024-07-13 22:20:37.501040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.168 qpair failed and we were unable to recover it. 00:37:18.168 [2024-07-13 22:20:37.501208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.168 [2024-07-13 22:20:37.501241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.168 qpair failed and we were unable to recover it. 
00:37:18.168 [2024-07-13 22:20:37.501443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.168 [2024-07-13 22:20:37.501476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.168 qpair failed and we were unable to recover it. 00:37:18.168 [2024-07-13 22:20:37.501634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.168 [2024-07-13 22:20:37.501678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.168 qpair failed and we were unable to recover it. 00:37:18.168 [2024-07-13 22:20:37.501872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.168 [2024-07-13 22:20:37.501918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.168 qpair failed and we were unable to recover it. 00:37:18.168 [2024-07-13 22:20:37.502100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.168 [2024-07-13 22:20:37.502133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.168 qpair failed and we were unable to recover it. 00:37:18.444 [2024-07-13 22:20:37.502329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.444 [2024-07-13 22:20:37.502362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.444 qpair failed and we were unable to recover it. 00:37:18.444 [2024-07-13 22:20:37.502560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.444 [2024-07-13 22:20:37.502593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.444 qpair failed and we were unable to recover it. 00:37:18.444 [2024-07-13 22:20:37.502787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.444 [2024-07-13 22:20:37.502819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.444 qpair failed and we were unable to recover it. 00:37:18.444 [2024-07-13 22:20:37.502995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.444 [2024-07-13 22:20:37.503028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.444 qpair failed and we were unable to recover it. 00:37:18.444 [2024-07-13 22:20:37.503187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.445 [2024-07-13 22:20:37.503219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.445 qpair failed and we were unable to recover it. 00:37:18.445 [2024-07-13 22:20:37.503405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.445 [2024-07-13 22:20:37.503452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.445 qpair failed and we were unable to recover it. 
00:37:18.445 [2024-07-13 22:20:37.503652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.445 [2024-07-13 22:20:37.503685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.445 qpair failed and we were unable to recover it. 00:37:18.445 [2024-07-13 22:20:37.503858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.445 [2024-07-13 22:20:37.503899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.445 qpair failed and we were unable to recover it. 00:37:18.445 [2024-07-13 22:20:37.504074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.445 [2024-07-13 22:20:37.504119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.445 qpair failed and we were unable to recover it. 00:37:18.445 [2024-07-13 22:20:37.504322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.445 [2024-07-13 22:20:37.504355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.445 qpair failed and we were unable to recover it. 00:37:18.445 [2024-07-13 22:20:37.504512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.445 [2024-07-13 22:20:37.504548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.445 qpair failed and we were unable to recover it. 00:37:18.445 [2024-07-13 22:20:37.504726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.445 [2024-07-13 22:20:37.504759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.445 qpair failed and we were unable to recover it. 00:37:18.445 [2024-07-13 22:20:37.504934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.445 [2024-07-13 22:20:37.504968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.445 qpair failed and we were unable to recover it. 00:37:18.445 [2024-07-13 22:20:37.505156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.445 [2024-07-13 22:20:37.505192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.445 qpair failed and we were unable to recover it. 00:37:18.445 [2024-07-13 22:20:37.505377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.445 [2024-07-13 22:20:37.505410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.445 qpair failed and we were unable to recover it. 00:37:18.445 [2024-07-13 22:20:37.505569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.445 [2024-07-13 22:20:37.505603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.445 qpair failed and we were unable to recover it. 
00:37:18.445 [2024-07-13 22:20:37.505772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.445 [2024-07-13 22:20:37.505804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.445 qpair failed and we were unable to recover it. 00:37:18.445 [2024-07-13 22:20:37.505994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.445 [2024-07-13 22:20:37.506027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.445 qpair failed and we were unable to recover it. 00:37:18.445 [2024-07-13 22:20:37.506199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.445 [2024-07-13 22:20:37.506232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.445 qpair failed and we were unable to recover it. 00:37:18.445 [2024-07-13 22:20:37.506392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.445 [2024-07-13 22:20:37.506428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.445 qpair failed and we were unable to recover it. 00:37:18.445 [2024-07-13 22:20:37.506612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.445 [2024-07-13 22:20:37.506645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.445 qpair failed and we were unable to recover it. 00:37:18.445 [2024-07-13 22:20:37.506837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.445 [2024-07-13 22:20:37.506884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.445 qpair failed and we were unable to recover it. 00:37:18.445 [2024-07-13 22:20:37.507053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.445 [2024-07-13 22:20:37.507087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.445 qpair failed and we were unable to recover it. 00:37:18.445 [2024-07-13 22:20:37.507278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.445 [2024-07-13 22:20:37.507311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.445 qpair failed and we were unable to recover it. 00:37:18.445 [2024-07-13 22:20:37.507465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.445 [2024-07-13 22:20:37.507496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.445 qpair failed and we were unable to recover it. 00:37:18.445 [2024-07-13 22:20:37.507683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.445 [2024-07-13 22:20:37.507716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.445 qpair failed and we were unable to recover it. 
00:37:18.445 [2024-07-13 22:20:37.507923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.445 [2024-07-13 22:20:37.507956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.445 qpair failed and we were unable to recover it.
00:37:18.445 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats ~200 more times between 2024-07-13 22:20:37.508135 and 22:20:37.551853 ...]
00:37:18.450 [2024-07-13 22:20:37.552026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.450 [2024-07-13 22:20:37.552058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.450 qpair failed and we were unable to recover it.
00:37:18.450 [2024-07-13 22:20:37.552245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.450 [2024-07-13 22:20:37.552277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.450 qpair failed and we were unable to recover it. 00:37:18.450 [2024-07-13 22:20:37.552435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.450 [2024-07-13 22:20:37.552477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.450 qpair failed and we were unable to recover it. 00:37:18.450 [2024-07-13 22:20:37.552655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.450 [2024-07-13 22:20:37.552688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.450 qpair failed and we were unable to recover it. 00:37:18.450 [2024-07-13 22:20:37.552889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.450 [2024-07-13 22:20:37.552921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.450 qpair failed and we were unable to recover it. 00:37:18.450 [2024-07-13 22:20:37.553106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.450 [2024-07-13 22:20:37.553137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.450 qpair failed and we were unable to recover it. 00:37:18.451 [2024-07-13 22:20:37.553319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.451 [2024-07-13 22:20:37.553351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.451 qpair failed and we were unable to recover it. 00:37:18.451 [2024-07-13 22:20:37.553507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.451 [2024-07-13 22:20:37.553540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.451 qpair failed and we were unable to recover it. 00:37:18.451 [2024-07-13 22:20:37.553732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.451 [2024-07-13 22:20:37.553765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.451 qpair failed and we were unable to recover it. 00:37:18.451 [2024-07-13 22:20:37.553942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.451 [2024-07-13 22:20:37.553976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.451 qpair failed and we were unable to recover it. 00:37:18.451 [2024-07-13 22:20:37.554162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.451 [2024-07-13 22:20:37.554195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.451 qpair failed and we were unable to recover it. 
00:37:18.451 [2024-07-13 22:20:37.554360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.451 [2024-07-13 22:20:37.554403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.451 qpair failed and we were unable to recover it. 00:37:18.451 [2024-07-13 22:20:37.554591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.451 [2024-07-13 22:20:37.554623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.451 qpair failed and we were unable to recover it. 00:37:18.451 [2024-07-13 22:20:37.554784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.451 [2024-07-13 22:20:37.554817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.451 qpair failed and we were unable to recover it. 00:37:18.451 [2024-07-13 22:20:37.555021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.451 [2024-07-13 22:20:37.555054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.451 qpair failed and we were unable to recover it. 00:37:18.451 [2024-07-13 22:20:37.555212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.451 [2024-07-13 22:20:37.555245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.451 qpair failed and we were unable to recover it. 00:37:18.451 [2024-07-13 22:20:37.555435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.451 [2024-07-13 22:20:37.555468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.451 qpair failed and we were unable to recover it. 00:37:18.451 [2024-07-13 22:20:37.555656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.451 [2024-07-13 22:20:37.555689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.451 qpair failed and we were unable to recover it. 00:37:18.451 [2024-07-13 22:20:37.555844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.451 [2024-07-13 22:20:37.555891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.451 qpair failed and we were unable to recover it. 00:37:18.451 [2024-07-13 22:20:37.556065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.451 [2024-07-13 22:20:37.556097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.451 qpair failed and we were unable to recover it. 00:37:18.451 [2024-07-13 22:20:37.556298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.451 [2024-07-13 22:20:37.556351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.451 qpair failed and we were unable to recover it. 
[... 80 repeated connect() failed (errno = 111) / sock connection error / qpair failed sequences for tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420, 22:20:37.556573 through 22:20:37.573791, omitted ...]
00:37:18.453 [2024-07-13 22:20:37.573988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150001f2280 is same with the state(5) to be set
00:37:18.453 [2024-07-13 22:20:37.574279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.453 [2024-07-13 22:20:37.574330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:18.453 qpair failed and we were unable to recover it.
[... 8 repeated connect() failed (errno = 111) / sock connection error / qpair failed sequences for tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420, 22:20:37.574530 through 22:20:37.576083, omitted ...]
00:37:18.453 [2024-07-13 22:20:37.576346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.453 [2024-07-13 22:20:37.576382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.453 qpair failed and we were unable to recover it. 00:37:18.453 [2024-07-13 22:20:37.576554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.453 [2024-07-13 22:20:37.576588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.453 qpair failed and we were unable to recover it. 00:37:18.453 [2024-07-13 22:20:37.576774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.453 [2024-07-13 22:20:37.576807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.453 qpair failed and we were unable to recover it. 00:37:18.453 [2024-07-13 22:20:37.576975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.453 [2024-07-13 22:20:37.577009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.453 qpair failed and we were unable to recover it. 00:37:18.453 [2024-07-13 22:20:37.577171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.453 [2024-07-13 22:20:37.577204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.453 qpair failed and we were unable to recover it. 00:37:18.453 [2024-07-13 22:20:37.577357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.453 [2024-07-13 22:20:37.577399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.453 qpair failed and we were unable to recover it. 00:37:18.453 [2024-07-13 22:20:37.577560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.453 [2024-07-13 22:20:37.577594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.453 qpair failed and we were unable to recover it. 00:37:18.453 [2024-07-13 22:20:37.577806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.453 [2024-07-13 22:20:37.577839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.453 qpair failed and we were unable to recover it. 00:37:18.453 [2024-07-13 22:20:37.578040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.453 [2024-07-13 22:20:37.578074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.453 qpair failed and we were unable to recover it. 00:37:18.453 [2024-07-13 22:20:37.578254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.453 [2024-07-13 22:20:37.578287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.453 qpair failed and we were unable to recover it. 
00:37:18.453 [2024-07-13 22:20:37.578471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.453 [2024-07-13 22:20:37.578504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.453 qpair failed and we were unable to recover it. 00:37:18.453 [2024-07-13 22:20:37.578679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.453 [2024-07-13 22:20:37.578712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.578878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.578912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.579080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.579115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.579310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.579344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.579563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.579595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.579782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.579819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.579994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.580028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.580192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.580225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.580383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.580415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 
00:37:18.454 [2024-07-13 22:20:37.580575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.580608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.580794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.580827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.581048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.581081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.581260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.581293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.581468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.581501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.581684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.581717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.581883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.581917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.582123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.582162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.582345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.582378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.582573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.582607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 
00:37:18.454 [2024-07-13 22:20:37.582775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.582808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.582985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.583018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.583185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.583218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.583402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.583435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.583593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.583626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.583789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.583822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.584027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.584060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.584256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.584289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.584449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.584482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 00:37:18.454 [2024-07-13 22:20:37.584639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.454 [2024-07-13 22:20:37.584674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.454 qpair failed and we were unable to recover it. 
00:37:18.454 [2024-07-13 22:20:37.584882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.454 [2024-07-13 22:20:37.584916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.454 qpair failed and we were unable to recover it.
00:37:18.454 [2024-07-13 22:20:37.585087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.454 [2024-07-13 22:20:37.585121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.454 qpair failed and we were unable to recover it.
00:37:18.454 [2024-07-13 22:20:37.585337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.454 [2024-07-13 22:20:37.585369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.454 qpair failed and we were unable to recover it.
00:37:18.454 [2024-07-13 22:20:37.585547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.454 [2024-07-13 22:20:37.585581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.454 qpair failed and we were unable to recover it.
00:37:18.454 [2024-07-13 22:20:37.585741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.454 [2024-07-13 22:20:37.585774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.454 qpair failed and we were unable to recover it.
00:37:18.454 [2024-07-13 22:20:37.585972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.454 [2024-07-13 22:20:37.586021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:18.454 qpair failed and we were unable to recover it.
00:37:18.454 [2024-07-13 22:20:37.586245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.454 [2024-07-13 22:20:37.586294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:18.454 qpair failed and we were unable to recover it.
00:37:18.454 [2024-07-13 22:20:37.586534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.454 [2024-07-13 22:20:37.586583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:18.454 qpair failed and we were unable to recover it.
00:37:18.454 [2024-07-13 22:20:37.586761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.454 [2024-07-13 22:20:37.586796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.454 qpair failed and we were unable to recover it.
00:37:18.454 [2024-07-13 22:20:37.586975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.454 [2024-07-13 22:20:37.587009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.454 qpair failed and we were unable to recover it.
00:37:18.454 [2024-07-13 22:20:37.587233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.454 [2024-07-13 22:20:37.587266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.454 qpair failed and we were unable to recover it.
00:37:18.454 [2024-07-13 22:20:37.587456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.454 [2024-07-13 22:20:37.587489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.454 qpair failed and we were unable to recover it.
00:37:18.454 [2024-07-13 22:20:37.587680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.587713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.587907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.587941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.588133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.588210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.588398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.588431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.588591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.588628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.588816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.588849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.589024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.589057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.589246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.589279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.589463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.589496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.589686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.589720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.589924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.589958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.590132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.590176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.590333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.590366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.590556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.590589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.590758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.590792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.590957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.590991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.591173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.591206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.591416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.591449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.591622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.591655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.591886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.591929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.592124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.592165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.592439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.592472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.592667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.592702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.592897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.592932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.593133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.593168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.593339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.593374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.593577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.593609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.597003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.597054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.597252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.597289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.597487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.597521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.597692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.597729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.597925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.597959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.598141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.598177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.598341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.598373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.598535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.598568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.598757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.598790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.598981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.599015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.599204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.599249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.599415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.599449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.599610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.599645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.599811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.455 [2024-07-13 22:20:37.599845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.455 qpair failed and we were unable to recover it.
00:37:18.455 [2024-07-13 22:20:37.600048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.600081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.600265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.600298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.600510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.600543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.600741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.600779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.600974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.601008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.601200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.601233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.601420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.601452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.601659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.601692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.601896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.601931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.602117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.602150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.602338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.602371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.602559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.602592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.602762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.602794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.602966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.602999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.603188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.603221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.603399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.603432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.603633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.603666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.603833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.603872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.604060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.604093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:37:18.456 [2024-07-13 22:20:37.604256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.604289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:37:18.456 [2024-07-13 22:20:37.604465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.604497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.604686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:37:18.456 [2024-07-13 22:20:37.604718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:18.456 [2024-07-13 22:20:37.604884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.604919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.605127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.605160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.605379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.605414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.605575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.605619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.605802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.605834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.606047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.606081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.606297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.606354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.606574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.606610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.606783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.606818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.606993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.607028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.607187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.607221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.607408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.607442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.607607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.607641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.607832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.607884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.608084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.608117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.608306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.608341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.456 [2024-07-13 22:20:37.608540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.456 [2024-07-13 22:20:37.608574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.456 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.608781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.608815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.609037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.609071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.609262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.609316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.609514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.609550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.609709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.609743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.609925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.609960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.610139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.610180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.610337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.610371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.610554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.610587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.610804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.610837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.611016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.611051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.611241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.611274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.611439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.611473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.611666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.611698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.611873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.611917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.612074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.612107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.612307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.612339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.612495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.612528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.612738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.612771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.612934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.612968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.613147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.613185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.613367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.613400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.613571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.613603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.613756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.613789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.613978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.614012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.614196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.614229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.614402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.614435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.614591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.614623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.614812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.614844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.615046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.615094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.615276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.615314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.615490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.615525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.615728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.615773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.615941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.615977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.457 [2024-07-13 22:20:37.616168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.457 [2024-07-13 22:20:37.616201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.457 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.616423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.616457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.616629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.616663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.616831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.616875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.617074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.617108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.617289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.617323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.617490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.617525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.617712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.617752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.617936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.617974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.618140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.618173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.618359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.618393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.618581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.618613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.618778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.618820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.619012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.619045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.619244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.619277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.619461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.619501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.619655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.619688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.619860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.619909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.620110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.620143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.620327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.620360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.620537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.620569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.620758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.620792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.620993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.621027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.621191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.621224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.621430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.621463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.621626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.621658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.621818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.621852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.622063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.622111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.622312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.622346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.622538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.622574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.622737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.622778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.622938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.622971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.623125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.623158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.623333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.623367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.623550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.623583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.623763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.623796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.623980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.624013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.624175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.624208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.624368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.624412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.624580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.624613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.624806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.458 [2024-07-13 22:20:37.624837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.458 qpair failed and we were unable to recover it.
00:37:18.458 [2024-07-13 22:20:37.625017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.459 [2024-07-13 22:20:37.625049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.459 qpair failed and we were unable to recover it.
00:37:18.459 [2024-07-13 22:20:37.625210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.459 [2024-07-13 22:20:37.625245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.459 qpair failed and we were unable to recover it. 00:37:18.459 [2024-07-13 22:20:37.625426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.459 [2024-07-13 22:20:37.625459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.459 qpair failed and we were unable to recover it. 00:37:18.459 [2024-07-13 22:20:37.625647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.459 [2024-07-13 22:20:37.625679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.459 qpair failed and we were unable to recover it. 00:37:18.459 [2024-07-13 22:20:37.625862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.459 [2024-07-13 22:20:37.625900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.459 qpair failed and we were unable to recover it. 00:37:18.459 [2024-07-13 22:20:37.626079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.459 [2024-07-13 22:20:37.626111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.459 qpair failed and we were unable to recover it. 00:37:18.459 [2024-07-13 22:20:37.626269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.459 [2024-07-13 22:20:37.626301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.459 qpair failed and we were unable to recover it. 00:37:18.459 [2024-07-13 22:20:37.626483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.459 [2024-07-13 22:20:37.626520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.459 qpair failed and we were unable to recover it. 00:37:18.459 [2024-07-13 22:20:37.626712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.459 [2024-07-13 22:20:37.626744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.459 qpair failed and we were unable to recover it. 00:37:18.459 [2024-07-13 22:20:37.626953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.459 [2024-07-13 22:20:37.626987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.459 qpair failed and we were unable to recover it. 00:37:18.459 [2024-07-13 22:20:37.627141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.459 [2024-07-13 22:20:37.627173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.459 qpair failed and we were unable to recover it. 
00:37:18.459 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:37:18.459 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:37:18.459 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:37:18.459 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:18.459 [interleaved with the trace above, 22:20:37.627349 .. 22:20:37.628934: the connect() failed, errno = 111 / sock connection error pair recurred 8 times, first against tqpair=0x6150001f2780, then tqpair=0x615000210000, all with addr=10.0.0.2, port=4420; each qpair failed and we were unable to recover it]
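(For reference: rpc_cmd is the autotest harness's wrapper around SPDK's JSON-RPC client, so the bdev_malloc_create call traced above corresponds, roughly, to invoking scripts/rpc.py by hand. A minimal sketch; the -s socket path is the SPDK default and is an assumption here, not something this log shows:)

    # Create a 64 MiB malloc-backed bdev with a 512-byte block size, named Malloc0.
    # Assumes a running SPDK target listening on the default RPC socket.
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
    # On success the RPC prints the new bdev's name: Malloc0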
00:37:18.459 [2024-07-13 22:20:37.629130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.459 [2024-07-13 22:20:37.629173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.459 qpair failed and we were unable to recover it.
00:37:18.463 [the same two-line error pair repeated another ~169 times between 22:20:37.629360 and 22:20:37.666901, cycling through tqpair=0x6150001f2780, 0x6150001ffe80, 0x615000210000 and 0x61500021ff00; every connect() to 10.0.0.2, port=4420 was refused and no qpair could be recovered]
00:37:18.463 [2024-07-13 22:20:37.667098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.463 [2024-07-13 22:20:37.667133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.463 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.667336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.667370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.667556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.667589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.667780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.667820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.667992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.668026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.668198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.668234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.668433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.668467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.668651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.668684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.668888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.668928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.669091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.669124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 
00:37:18.464 [2024-07-13 22:20:37.669362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.669398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.669588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.669633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.669795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.669829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.670021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.670055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.670278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.670311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.670477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.670510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.670697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.670730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.670929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.670977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.671182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.671237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.671420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.671467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 
00:37:18.464 [2024-07-13 22:20:37.671637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.671671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.671858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.671898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.672060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.672092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.672274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.672305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.672482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.672514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.672679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.672720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.672922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.672957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.673118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.673151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.673346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.673379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.673568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.673603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 
00:37:18.464 [2024-07-13 22:20:37.673807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.673855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.674037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.674073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.674240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.674274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.674488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.674520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.674684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.674716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.674877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.674910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.675067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.675099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.675288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.675320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.675490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.675529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.675683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.675716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 
00:37:18.464 [2024-07-13 22:20:37.675883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.675915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.464 qpair failed and we were unable to recover it. 00:37:18.464 [2024-07-13 22:20:37.676087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.464 [2024-07-13 22:20:37.676124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.676327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.676363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.676549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.676588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.676770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.676804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.676965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.676999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.677200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.677247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.677449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.677484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.677644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.677677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.677841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.677881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 
00:37:18.465 [2024-07-13 22:20:37.678047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.678079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.678239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.678271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.678435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.678468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.678622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.678655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.678844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.678898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.679083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.679119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.679287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.679321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.679508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.679540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.679712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.679748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.679923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.679969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 
00:37:18.465 [2024-07-13 22:20:37.680135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.680169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.680326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.680359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.680543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.680575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.680728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.680760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.680967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.681014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.681241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.681275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.681472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.681506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.681692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.681725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.681919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.681954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.682153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.682201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 
00:37:18.465 [2024-07-13 22:20:37.682397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.682432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.682601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.682633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.682793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.682826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.683016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.683048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.683211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.683242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.683405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.683438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.683595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.683628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.683816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.683871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.684038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.684070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.684245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.684276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 
00:37:18.465 [2024-07-13 22:20:37.684438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.684470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.684658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.684691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.465 qpair failed and we were unable to recover it. 00:37:18.465 [2024-07-13 22:20:37.684860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.465 [2024-07-13 22:20:37.684899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.685058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.685095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.685300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.685331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.685512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.685545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.685722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.685754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.685920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.685953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.686157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.686204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.686404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.686439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 
00:37:18.466 [2024-07-13 22:20:37.686602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.686636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.686797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.686830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.687055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.687103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.687312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.687348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.687515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.687551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.687742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.687776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.687940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.687975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.688161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.688208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.688403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.688438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.688622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.688655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 
00:37:18.466 [2024-07-13 22:20:37.688848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.688892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.689068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.689101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.689280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.689312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.689499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.689532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.689740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.689773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.689962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.689996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.690158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.690191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.690372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.690404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.690613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.690645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.690805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.690837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 
00:37:18.466 [2024-07-13 22:20:37.691051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.691099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.691294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.691330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.691507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.691541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.691726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.691760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.691931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.691966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.692158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.692206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.466 [2024-07-13 22:20:37.692393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.466 [2024-07-13 22:20:37.692429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.466 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.692625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.692660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.692815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.692849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.693066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.693112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 
00:37:18.467 [2024-07-13 22:20:37.693306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.693341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.693515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.693548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.693762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.693796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.693987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.694027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.694202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.694250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.694465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.694501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.694690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.694724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.694888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.694922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.695126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.695173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.695359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.695395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 
00:37:18.467 [2024-07-13 22:20:37.695592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.695627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.695815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.695849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.696033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.696081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.696311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.696346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.696541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.696574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.696735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.696768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.696931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.696966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.697182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.697229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.697423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.697471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.697658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.697692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 
00:37:18.467 [2024-07-13 22:20:37.697880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.697914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.698106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.698140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.698364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.698411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.698577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.698612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.698808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.698843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.699032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.699067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.699271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.699305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.699491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.699524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.699697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.699730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.699920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.699954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 
00:37:18.467 [2024-07-13 22:20:37.700162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.700211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.700410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.700446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.700618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.700653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.700818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.700853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.701068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.701114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.701286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.701321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.701488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.467 [2024-07-13 22:20:37.701520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.467 qpair failed and we were unable to recover it. 00:37:18.467 [2024-07-13 22:20:37.701679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.701711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 [2024-07-13 22:20:37.701896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.701930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 [2024-07-13 22:20:37.702086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.702118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 
00:37:18.468 [2024-07-13 22:20:37.702326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.702359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 [2024-07-13 22:20:37.702555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.702588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 [2024-07-13 22:20:37.702755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.702787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 [2024-07-13 22:20:37.702966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.703020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 [2024-07-13 22:20:37.703183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.703219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 [2024-07-13 22:20:37.703439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.703472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 [2024-07-13 22:20:37.703632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.703665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 [2024-07-13 22:20:37.703881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.703916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 [2024-07-13 22:20:37.704077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.704110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 [2024-07-13 22:20:37.704323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.704358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 
00:37:18.468 [2024-07-13 22:20:37.704544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.704576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 [2024-07-13 22:20:37.704758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.704790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 [2024-07-13 22:20:37.705004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.705038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 [2024-07-13 22:20:37.705196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.705228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 [2024-07-13 22:20:37.705388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.705420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 [2024-07-13 22:20:37.705577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.705609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 [2024-07-13 22:20:37.705767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.705799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 [2024-07-13 22:20:37.705990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.706039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 Malloc0 00:37:18.468 [2024-07-13 22:20:37.706234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.706282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 
00:37:18.468 [2024-07-13 22:20:37.706488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:18.468 [2024-07-13 22:20:37.706525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:37:18.468 [2024-07-13 22:20:37.706708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.706742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:18.468 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:18.468 [2024-07-13 22:20:37.706929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.706963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 [2024-07-13 22:20:37.707150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.707184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 [2024-07-13 22:20:37.707383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.707417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 [2024-07-13 22:20:37.707600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.707633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 [2024-07-13 22:20:37.707817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.707849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 00:37:18.468 [2024-07-13 22:20:37.708024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.468 [2024-07-13 22:20:37.708056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.468 qpair failed and we were unable to recover it. 
00:37:18.468 [2024-07-13 22:20:37.708238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.468 [2024-07-13 22:20:37.708287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:18.468 qpair failed and we were unable to recover it.
00:37:18.468 [2024-07-13 22:20:37.708484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.468 [2024-07-13 22:20:37.708520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
00:37:18.468 qpair failed and we were unable to recover it.
00:37:18.468 [2024-07-13 22:20:37.708692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.468 [2024-07-13 22:20:37.708727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.468 qpair failed and we were unable to recover it.
00:37:18.468 [2024-07-13 22:20:37.708919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.468 [2024-07-13 22:20:37.708954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.468 qpair failed and we were unable to recover it.
00:37:18.468 [2024-07-13 22:20:37.709123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.468 [2024-07-13 22:20:37.709156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.468 qpair failed and we were unable to recover it.
00:37:18.468 [2024-07-13 22:20:37.709321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.468 [2024-07-13 22:20:37.709354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.468 qpair failed and we were unable to recover it.
00:37:18.468 [2024-07-13 22:20:37.709568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.468 [2024-07-13 22:20:37.709602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.468 qpair failed and we were unable to recover it.
00:37:18.468 [2024-07-13 22:20:37.709760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.468 [2024-07-13 22:20:37.709793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
00:37:18.468 qpair failed and we were unable to recover it.
[2024-07-13 22:20:37.709780] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:37:18.468 [2024-07-13 22:20:37.709968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-13 22:20:37.710004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:37:18.468 [2024-07-13 22:20:37.710169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.710202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.710404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.710438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.710599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.710633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.710848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.710888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.711054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.711087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.711247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.711280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.711478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.711512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.711730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.711764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.711927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.711961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.712150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.712198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 
00:37:18.469 [2024-07-13 22:20:37.712406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.712454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.712622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.712657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.712818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.712852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.713025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.713059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.713240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.713286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.713491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.713526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.713682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.713714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.713881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.713915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.714101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.714134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.714329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.714362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 
00:37:18.469 [2024-07-13 22:20:37.714548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.714580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.714740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.714772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.714955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.715003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.715214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.715261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.715432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.715466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.715631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.715665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.715827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.715860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.716026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.716059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.716244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.716278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.716461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.716507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 
00:37:18.469 [2024-07-13 22:20:37.716689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.716722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.716891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.716924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.717085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.717122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.717332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.717364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.717558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.717592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.717780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.717813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.718002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.718036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.718214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.718247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.718436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.718469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 00:37:18.469 [2024-07-13 22:20:37.718649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.469 [2024-07-13 22:20:37.718682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.469 qpair failed and we were unable to recover it. 
00:37:18.469 [2024-07-13 22:20:37.718875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-13 22:20:37.718908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:37:18.470 [2024-07-13 22:20:37.719095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.470 [2024-07-13 22:20:37.719142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.470 qpair failed and we were unable to recover it.
00:37:18.470 [2024-07-13 22:20:37.719313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.470 [2024-07-13 22:20:37.719348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.470 qpair failed and we were unable to recover it.
00:37:18.470 [2024-07-13 22:20:37.719539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.470 [2024-07-13 22:20:37.719572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
00:37:18.470 qpair failed and we were unable to recover it.
00:37:18.470 [2024-07-13 22:20:37.719752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.470 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
[2024-07-13 22:20:37.719786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:37:18.470 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
[2024-07-13 22:20:37.719946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-13 22:20:37.719980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:37:18.470 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
[2024-07-13 22:20:37.720171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:37:18.470 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:18.470 [2024-07-13 22:20:37.720205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:37:18.470 [2024-07-13 22:20:37.720382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-13 22:20:37.720416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:37:18.470 [2024-07-13 22:20:37.720584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.720616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 00:37:18.470 [2024-07-13 22:20:37.720780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.720812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 00:37:18.470 [2024-07-13 22:20:37.720997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.721029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 00:37:18.470 [2024-07-13 22:20:37.721201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.721233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 00:37:18.470 [2024-07-13 22:20:37.721388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.721420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 00:37:18.470 [2024-07-13 22:20:37.721604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.721637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 00:37:18.470 [2024-07-13 22:20:37.721797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.721829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 00:37:18.470 [2024-07-13 22:20:37.722006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.722054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 00:37:18.470 [2024-07-13 22:20:37.722242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.722289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 00:37:18.470 [2024-07-13 22:20:37.722522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.722559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 
00:37:18.470 [2024-07-13 22:20:37.722750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.722785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 00:37:18.470 [2024-07-13 22:20:37.722953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.722987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000210000 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 00:37:18.470 [2024-07-13 22:20:37.723184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.723232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 00:37:18.470 [2024-07-13 22:20:37.723433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.723480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 00:37:18.470 [2024-07-13 22:20:37.723681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.723716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 00:37:18.470 [2024-07-13 22:20:37.723884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.723917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 00:37:18.470 [2024-07-13 22:20:37.724095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.724128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 00:37:18.470 [2024-07-13 22:20:37.724324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.724356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 00:37:18.470 [2024-07-13 22:20:37.724546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.724577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 00:37:18.470 [2024-07-13 22:20:37.724765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.724797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 
00:37:18.470 [2024-07-13 22:20:37.724957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.724990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 00:37:18.470 [2024-07-13 22:20:37.725161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.725193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 00:37:18.470 [2024-07-13 22:20:37.725384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.725422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 00:37:18.470 [2024-07-13 22:20:37.725574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.725606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 00:37:18.470 [2024-07-13 22:20:37.725792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.725824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 00:37:18.470 [2024-07-13 22:20:37.726013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.470 [2024-07-13 22:20:37.726061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.470 qpair failed and we were unable to recover it. 00:37:18.470 [2024-07-13 22:20:37.726306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.726353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.726525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.726560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.726725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.726759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.726947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.726981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 
00:37:18.471 [2024-07-13 22:20:37.727151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-13 22:20:37.727185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
[2024-07-13 22:20:37.727369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-13 22:20:37.727401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
[2024-07-13 22:20:37.727563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-13 22:20:37.727596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
[2024-07-13 22:20:37.727770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
[2024-07-13 22:20:37.727804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
[2024-07-13 22:20:37.728020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[2024-07-13 22:20:37.728066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
[2024-07-13 22:20:37.728237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-13 22:20:37.728272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
[2024-07-13 22:20:37.728440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-13 22:20:37.728473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
[2024-07-13 22:20:37.728635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-13 22:20:37.728666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:37:18.471 [2024-07-13 22:20:37.728826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.728858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.729026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.729059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.729219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.729252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.729437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.729469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.729618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.729649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.729813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.729848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.730073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.730120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.730318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.730354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.730523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.730557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.730754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.730791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 
00:37:18.471 [2024-07-13 22:20:37.730986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.731019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.731191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.731225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.731438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.731470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.731657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.731688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.731886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.731918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.732112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.732144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.732328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.732360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.732542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.732573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.732739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.732770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.732937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.732969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 
00:37:18.471 [2024-07-13 22:20:37.733140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.733173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.733339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.733371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.733556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.733589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.733760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.733792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.733959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.733993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.734151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.734183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.734371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.471 [2024-07-13 22:20:37.734403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.471 qpair failed and we were unable to recover it. 00:37:18.471 [2024-07-13 22:20:37.734611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.472 [2024-07-13 22:20:37.734644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.472 qpair failed and we were unable to recover it. 00:37:18.472 [2024-07-13 22:20:37.734829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.472 [2024-07-13 22:20:37.734861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.472 qpair failed and we were unable to recover it. 00:37:18.472 [2024-07-13 22:20:37.735055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.472 [2024-07-13 22:20:37.735087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.472 qpair failed and we were unable to recover it. 
00:37:18.472 [2024-07-13 22:20:37.735293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.472 [2024-07-13 22:20:37.735324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.472 qpair failed and we were unable to recover it. 00:37:18.472 [2024-07-13 22:20:37.735514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.472 [2024-07-13 22:20:37.735546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.472 qpair failed and we were unable to recover it. 00:37:18.472 [2024-07-13 22:20:37.735701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.472 [2024-07-13 22:20:37.735732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.472 qpair failed and we were unable to recover it. 00:37:18.472 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:18.472 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:18.472 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:18.472 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:18.472 [2024-07-13 22:20:37.735905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.472 [2024-07-13 22:20:37.735939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.472 qpair failed and we were unable to recover it. 00:37:18.472 [2024-07-13 22:20:37.736115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.472 [2024-07-13 22:20:37.736147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.472 qpair failed and we were unable to recover it. 00:37:18.472 [2024-07-13 22:20:37.736344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.472 [2024-07-13 22:20:37.736376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.472 qpair failed and we were unable to recover it. 00:37:18.472 [2024-07-13 22:20:37.736542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.472 [2024-07-13 22:20:37.736575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.472 qpair failed and we were unable to recover it. 00:37:18.472 [2024-07-13 22:20:37.736784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.472 [2024-07-13 22:20:37.736829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001f2780 with addr=10.0.0.2, port=4420 00:37:18.472 qpair failed and we were unable to recover it.
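The four xtrace records above come from the shell side of nvmf_target_disconnect_tc2: while the initiator loops on refused connects, the test calls rpc_cmd nvmf_subsystem_add_listener to bring the subsystem listener back up. In the SPDK autotest harness rpc_cmd wraps scripts/rpc.py, so a minimal sketch of the equivalent direct invocation would look like the following, assuming the target exposes the default RPC socket /var/tmp/spdk.sock (the socket path is not shown in this log):

    # Sketch only: re-add the TCP listener that the traced rpc_cmd call targets.
    # The -s /var/tmp/spdk.sock default is an assumption; subsystem, address and
    # port are taken verbatim from the xtrace line above.
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420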
00:37:18.472 [2024-07-13 22:20:37.737020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.472 [2024-07-13 22:20:37.737068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.472 qpair failed and we were unable to recover it. 00:37:18.472 [2024-07-13 22:20:37.737249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.472 [2024-07-13 22:20:37.737296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.472 qpair failed and we were unable to recover it. 00:37:18.472 [2024-07-13 22:20:37.737477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.472 [2024-07-13 22:20:37.737513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.472 qpair failed and we were unable to recover it. 00:37:18.472 [2024-07-13 22:20:37.737685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.472 [2024-07-13 22:20:37.737719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.472 qpair failed and we were unable to recover it. 00:37:18.472 [2024-07-13 22:20:37.737884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.472 [2024-07-13 22:20:37.737918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.472 qpair failed and we were unable to recover it. 00:37:18.472 [2024-07-13 22:20:37.738095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.472 [2024-07-13 22:20:37.738128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.472 qpair failed and we were unable to recover it. 00:37:18.472 [2024-07-13 22:20:37.738298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.472 [2024-07-13 22:20:37.738331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.472 qpair failed and we were unable to recover it. 00:37:18.472 [2024-07-13 22:20:37.738484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.472 [2024-07-13 22:20:37.738516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.472 qpair failed and we were unable to recover it. 00:37:18.472 [2024-07-13 22:20:37.738676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.472 [2024-07-13 22:20:37.738709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.472 qpair failed and we were unable to recover it. 00:37:18.472 [2024-07-13 22:20:37.738886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.472 [2024-07-13 22:20:37.738919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150001ffe80 with addr=10.0.0.2, port=4420 00:37:18.472 qpair failed and we were unable to recover it. 
00:37:18.472 [2024-07-13 22:20:37.739114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.472 [2024-07-13 22:20:37.739162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.472 qpair failed and we were unable to recover it. 00:37:18.472 [2024-07-13 22:20:37.739364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.472 [2024-07-13 22:20:37.739401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.472 qpair failed and we were unable to recover it. 00:37:18.472 [2024-07-13 22:20:37.739580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:37:18.472 [2024-07-13 22:20:37.739614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500021ff00 with addr=10.0.0.2, port=4420 00:37:18.472 qpair failed and we were unable to recover it. 00:37:18.472 [2024-07-13 22:20:37.739835] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:18.472 [2024-07-13 22:20:37.741444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.472 [2024-07-13 22:20:37.741651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.472 [2024-07-13 22:20:37.741687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.472 [2024-07-13 22:20:37.741712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.472 [2024-07-13 22:20:37.741733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500021ff00 00:37:18.472 [2024-07-13 22:20:37.741789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:18.472 qpair failed and we were unable to recover it. 
00:37:18.472 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:18.472 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:18.472 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:18.472 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:37:18.472 [2024-07-13 22:20:37.751174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.472 [2024-07-13 22:20:37.751411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.472 [2024-07-13 22:20:37.751444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.472 [2024-07-13 22:20:37.751483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.472 [2024-07-13 22:20:37.751502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500021ff00 00:37:18.472 [2024-07-13 22:20:37.751544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:18.472 qpair failed and we were unable to recover it. 00:37:18.472 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:18.472 22:20:37 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 53502 00:37:18.472 [2024-07-13 22:20:37.761186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.472 [2024-07-13 22:20:37.761368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.472 [2024-07-13 22:20:37.761408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.472 [2024-07-13 22:20:37.761432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.472 [2024-07-13 22:20:37.761451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500021ff00 00:37:18.472 [2024-07-13 22:20:37.761491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:18.472 qpair failed and we were unable to recover it. 
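Per the xtrace, the test then re-adds the discovery listener (host/target_disconnect.sh@26) and blocks on wait 53502, i.e. it waits for the backgrounded initiator process to exit while the CONNECT retries keep failing. A sketch of the equivalent direct discovery call, under the same default-socket assumption as earlier:

    # Sketch only: re-add the listener on the discovery subsystem.
    # Arguments mirror the traced rpc_cmd line; the socket path is assumed.
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener \
        discovery -t tcp -a 10.0.0.2 -s 4420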
00:37:18.472 [2024-07-13 22:20:37.771211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.472 [2024-07-13 22:20:37.771402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.472 [2024-07-13 22:20:37.771435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.472 [2024-07-13 22:20:37.771458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.472 [2024-07-13 22:20:37.771477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500021ff00 00:37:18.472 [2024-07-13 22:20:37.771518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:18.472 qpair failed and we were unable to recover it. 00:37:18.472 [2024-07-13 22:20:37.781263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.472 [2024-07-13 22:20:37.781472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.472 [2024-07-13 22:20:37.781504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.472 [2024-07-13 22:20:37.781526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.473 [2024-07-13 22:20:37.781545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500021ff00 00:37:18.473 [2024-07-13 22:20:37.781599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:18.473 qpair failed and we were unable to recover it. 00:37:18.473 [2024-07-13 22:20:37.791251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.473 [2024-07-13 22:20:37.791439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.473 [2024-07-13 22:20:37.791473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.473 [2024-07-13 22:20:37.791495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.473 [2024-07-13 22:20:37.791513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500021ff00 00:37:18.473 [2024-07-13 22:20:37.791556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:18.473 qpair failed and we were unable to recover it. 
00:37:18.473 [2024-07-13 22:20:37.801238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.473 [2024-07-13 22:20:37.801413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.473 [2024-07-13 22:20:37.801447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.473 [2024-07-13 22:20:37.801470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.473 [2024-07-13 22:20:37.801488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500021ff00 00:37:18.473 [2024-07-13 22:20:37.801535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:18.473 qpair failed and we were unable to recover it. 00:37:18.473 [2024-07-13 22:20:37.811324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.473 [2024-07-13 22:20:37.811515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.473 [2024-07-13 22:20:37.811549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.473 [2024-07-13 22:20:37.811571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.473 [2024-07-13 22:20:37.811590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500021ff00 00:37:18.473 [2024-07-13 22:20:37.811631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:18.473 qpair failed and we were unable to recover it. 00:37:18.473 [2024-07-13 22:20:37.821300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.473 [2024-07-13 22:20:37.821475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.473 [2024-07-13 22:20:37.821509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.473 [2024-07-13 22:20:37.821531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.473 [2024-07-13 22:20:37.821550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500021ff00 00:37:18.473 [2024-07-13 22:20:37.821590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:18.473 qpair failed and we were unable to recover it. 
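From here on the log repeats one shape per reconnect attempt: an Unknown controller ID 0x1 record from the target, then the initiator's Connect command failed / Failed to poll / Failed to connect / CQ transport error sequence ending in "qpair failed and we were unable to recover it." When triaging a run like this it is usually faster to count the failure modes than to read each block; a sketch, assuming the console output was saved to a file named build.log (hypothetical name):

    # Sketch: tally the two failure modes (build.log is an assumed file name).
    grep -o 'connect() failed, errno = 111' build.log | wc -l    # TCP-level refusals
    grep -o 'Connect command failed, rc -5' build.log | wc -l    # fabrics CONNECT rejections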
00:37:18.737 [2024-07-13 22:20:37.831464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.737 [2024-07-13 22:20:37.831702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.737 [2024-07-13 22:20:37.831734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.737 [2024-07-13 22:20:37.831757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.737 [2024-07-13 22:20:37.831775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500021ff00 00:37:18.737 [2024-07-13 22:20:37.831829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:18.737 qpair failed and we were unable to recover it. 00:37:18.737 [2024-07-13 22:20:37.841400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.737 [2024-07-13 22:20:37.841582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.737 [2024-07-13 22:20:37.841634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.737 [2024-07-13 22:20:37.841656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.737 [2024-07-13 22:20:37.841675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500021ff00 00:37:18.737 [2024-07-13 22:20:37.841737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:18.737 qpair failed and we were unable to recover it. 00:37:18.737 [2024-07-13 22:20:37.851387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.737 [2024-07-13 22:20:37.851574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.737 [2024-07-13 22:20:37.851614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.737 [2024-07-13 22:20:37.851637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.737 [2024-07-13 22:20:37.851655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500021ff00 00:37:18.737 [2024-07-13 22:20:37.851696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:18.737 qpair failed and we were unable to recover it. 
00:37:18.737 [2024-07-13 22:20:37.861446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.737 [2024-07-13 22:20:37.861675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.737 [2024-07-13 22:20:37.861709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.737 [2024-07-13 22:20:37.861732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.737 [2024-07-13 22:20:37.861751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500021ff00 00:37:18.737 [2024-07-13 22:20:37.861791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:18.737 qpair failed and we were unable to recover it. 00:37:18.737 [2024-07-13 22:20:37.871538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.737 [2024-07-13 22:20:37.871744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.737 [2024-07-13 22:20:37.871776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.737 [2024-07-13 22:20:37.871798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.738 [2024-07-13 22:20:37.871816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500021ff00 00:37:18.738 [2024-07-13 22:20:37.871890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:18.738 qpair failed and we were unable to recover it. 00:37:18.738 [2024-07-13 22:20:37.881498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.738 [2024-07-13 22:20:37.881699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.738 [2024-07-13 22:20:37.881731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.738 [2024-07-13 22:20:37.881753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.738 [2024-07-13 22:20:37.881771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500021ff00 00:37:18.738 [2024-07-13 22:20:37.881838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:18.738 qpair failed and we were unable to recover it. 
00:37:18.738 [2024-07-13 22:20:37.891565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.738 [2024-07-13 22:20:37.891759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.738 [2024-07-13 22:20:37.891793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.738 [2024-07-13 22:20:37.891816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.738 [2024-07-13 22:20:37.891834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500021ff00 00:37:18.738 [2024-07-13 22:20:37.891889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:18.738 qpair failed and we were unable to recover it. 00:37:18.738 [2024-07-13 22:20:37.901570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.738 [2024-07-13 22:20:37.901793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.738 [2024-07-13 22:20:37.901832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.738 [2024-07-13 22:20:37.901855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.738 [2024-07-13 22:20:37.901882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500021ff00 00:37:18.738 [2024-07-13 22:20:37.901925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:18.738 qpair failed and we were unable to recover it. 00:37:18.738 [2024-07-13 22:20:37.911634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.738 [2024-07-13 22:20:37.911841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.738 [2024-07-13 22:20:37.911904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.738 [2024-07-13 22:20:37.911933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.738 [2024-07-13 22:20:37.911955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:18.738 [2024-07-13 22:20:37.911999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:18.738 qpair failed and we were unable to recover it. 
00:37:18.738 [2024-07-13 22:20:37.921595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.738 [2024-07-13 22:20:37.921774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.738 [2024-07-13 22:20:37.921808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.738 [2024-07-13 22:20:37.921831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.738 [2024-07-13 22:20:37.921850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:18.738 [2024-07-13 22:20:37.921897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:18.738 qpair failed and we were unable to recover it. 00:37:18.738 [2024-07-13 22:20:37.931737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.738 [2024-07-13 22:20:37.931929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.738 [2024-07-13 22:20:37.931963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.738 [2024-07-13 22:20:37.931986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.738 [2024-07-13 22:20:37.932008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:18.738 [2024-07-13 22:20:37.932048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:18.738 qpair failed and we were unable to recover it. 00:37:18.738 [2024-07-13 22:20:37.941676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.738 [2024-07-13 22:20:37.941883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.738 [2024-07-13 22:20:37.941926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.738 [2024-07-13 22:20:37.941949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.738 [2024-07-13 22:20:37.941967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:18.738 [2024-07-13 22:20:37.942007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:18.738 qpair failed and we were unable to recover it. 
00:37:18.738 [2024-07-13 22:20:37.951734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.738 [2024-07-13 22:20:37.951934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.738 [2024-07-13 22:20:37.951969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.738 [2024-07-13 22:20:37.951991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.738 [2024-07-13 22:20:37.952009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:18.738 [2024-07-13 22:20:37.952052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:18.738 qpair failed and we were unable to recover it. 00:37:18.738 [2024-07-13 22:20:37.961713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.738 [2024-07-13 22:20:37.961917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.738 [2024-07-13 22:20:37.961964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.738 [2024-07-13 22:20:37.962000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.738 [2024-07-13 22:20:37.962030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:18.738 [2024-07-13 22:20:37.962088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:18.738 qpair failed and we were unable to recover it. 00:37:18.738 [2024-07-13 22:20:37.971769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.738 [2024-07-13 22:20:37.971949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.738 [2024-07-13 22:20:37.971983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.738 [2024-07-13 22:20:37.972005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.738 [2024-07-13 22:20:37.972023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:18.738 [2024-07-13 22:20:37.972064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:18.738 qpair failed and we were unable to recover it. 
00:37:18.738 [2024-07-13 22:20:37.981857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.738 [2024-07-13 22:20:37.982055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.738 [2024-07-13 22:20:37.982088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.738 [2024-07-13 22:20:37.982110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.738 [2024-07-13 22:20:37.982134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:18.738 [2024-07-13 22:20:37.982175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:18.738 qpair failed and we were unable to recover it. 00:37:18.738 [2024-07-13 22:20:37.991862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.738 [2024-07-13 22:20:37.992068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.738 [2024-07-13 22:20:37.992101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.738 [2024-07-13 22:20:37.992123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.738 [2024-07-13 22:20:37.992141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:18.738 [2024-07-13 22:20:37.992180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:18.738 qpair failed and we were unable to recover it. 00:37:18.738 [2024-07-13 22:20:38.001940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.738 [2024-07-13 22:20:38.002113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.738 [2024-07-13 22:20:38.002147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.738 [2024-07-13 22:20:38.002169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.738 [2024-07-13 22:20:38.002187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:18.738 [2024-07-13 22:20:38.002227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:18.738 qpair failed and we were unable to recover it. 
00:37:18.738 [2024-07-13 22:20:38.011880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.738 [2024-07-13 22:20:38.012091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.738 [2024-07-13 22:20:38.012124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.738 [2024-07-13 22:20:38.012146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.738 [2024-07-13 22:20:38.012181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:18.739 [2024-07-13 22:20:38.012220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:18.739 qpair failed and we were unable to recover it. 00:37:18.739 [2024-07-13 22:20:38.021946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.739 [2024-07-13 22:20:38.022132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.739 [2024-07-13 22:20:38.022166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.739 [2024-07-13 22:20:38.022188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.739 [2024-07-13 22:20:38.022207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:18.739 [2024-07-13 22:20:38.022247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:18.739 qpair failed and we were unable to recover it. 00:37:18.739 [2024-07-13 22:20:38.031972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.739 [2024-07-13 22:20:38.032168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.739 [2024-07-13 22:20:38.032216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.739 [2024-07-13 22:20:38.032239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.739 [2024-07-13 22:20:38.032257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:18.739 [2024-07-13 22:20:38.032310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:18.739 qpair failed and we were unable to recover it. 
00:37:18.739 [2024-07-13 22:20:38.041939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.739 [2024-07-13 22:20:38.042113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.739 [2024-07-13 22:20:38.042146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.739 [2024-07-13 22:20:38.042169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.739 [2024-07-13 22:20:38.042188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:18.739 [2024-07-13 22:20:38.042227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:18.739 qpair failed and we were unable to recover it. 00:37:18.739 [2024-07-13 22:20:38.052012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.739 [2024-07-13 22:20:38.052210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.739 [2024-07-13 22:20:38.052244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.739 [2024-07-13 22:20:38.052266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.739 [2024-07-13 22:20:38.052285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:18.739 [2024-07-13 22:20:38.052325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:18.739 qpair failed and we were unable to recover it. 00:37:18.739 [2024-07-13 22:20:38.061998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.739 [2024-07-13 22:20:38.062177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.739 [2024-07-13 22:20:38.062226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.739 [2024-07-13 22:20:38.062248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.739 [2024-07-13 22:20:38.062266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:18.739 [2024-07-13 22:20:38.062319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:18.739 qpair failed and we were unable to recover it. 
00:37:18.739 [2024-07-13 22:20:38.072188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.739 [2024-07-13 22:20:38.072401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.739 [2024-07-13 22:20:38.072433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.739 [2024-07-13 22:20:38.072460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.739 [2024-07-13 22:20:38.072479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:18.739 [2024-07-13 22:20:38.072533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:18.739 qpair failed and we were unable to recover it. 00:37:18.739 [2024-07-13 22:20:38.082142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.739 [2024-07-13 22:20:38.082342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.739 [2024-07-13 22:20:38.082378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.739 [2024-07-13 22:20:38.082400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.739 [2024-07-13 22:20:38.082418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:18.739 [2024-07-13 22:20:38.082471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:18.739 qpair failed and we were unable to recover it. 00:37:18.739 [2024-07-13 22:20:38.092080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.739 [2024-07-13 22:20:38.092266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.739 [2024-07-13 22:20:38.092299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.739 [2024-07-13 22:20:38.092320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.739 [2024-07-13 22:20:38.092339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:18.739 [2024-07-13 22:20:38.092378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:18.739 qpair failed and we were unable to recover it. 
00:37:18.739 [2024-07-13 22:20:38.102135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.739 [2024-07-13 22:20:38.102314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.739 [2024-07-13 22:20:38.102347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.739 [2024-07-13 22:20:38.102369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.739 [2024-07-13 22:20:38.102387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:18.739 [2024-07-13 22:20:38.102427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:18.739 qpair failed and we were unable to recover it. 00:37:18.739 [2024-07-13 22:20:38.112215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.739 [2024-07-13 22:20:38.112489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.739 [2024-07-13 22:20:38.112521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.739 [2024-07-13 22:20:38.112548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.739 [2024-07-13 22:20:38.112568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:18.739 [2024-07-13 22:20:38.112622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:18.739 qpair failed and we were unable to recover it. 00:37:18.739 [2024-07-13 22:20:38.122253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:18.739 [2024-07-13 22:20:38.122486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:18.739 [2024-07-13 22:20:38.122519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:18.739 [2024-07-13 22:20:38.122541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:18.739 [2024-07-13 22:20:38.122559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:18.739 [2024-07-13 22:20:38.122612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:18.739 qpair failed and we were unable to recover it. 
00:37:19.000 [2024-07-13 22:20:38.132239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.000 [2024-07-13 22:20:38.132426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.000 [2024-07-13 22:20:38.132460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.000 [2024-07-13 22:20:38.132483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.000 [2024-07-13 22:20:38.132501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.000 [2024-07-13 22:20:38.132541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.000 qpair failed and we were unable to recover it. 00:37:19.000 [2024-07-13 22:20:38.142217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.000 [2024-07-13 22:20:38.142399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.000 [2024-07-13 22:20:38.142433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.000 [2024-07-13 22:20:38.142455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.000 [2024-07-13 22:20:38.142473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.000 [2024-07-13 22:20:38.142513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.000 qpair failed and we were unable to recover it. 00:37:19.000 [2024-07-13 22:20:38.152318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.000 [2024-07-13 22:20:38.152525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.000 [2024-07-13 22:20:38.152557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.000 [2024-07-13 22:20:38.152579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.000 [2024-07-13 22:20:38.152596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.000 [2024-07-13 22:20:38.152650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.000 qpair failed and we were unable to recover it. 
00:37:19.000 [2024-07-13 22:20:38.162315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.000 [2024-07-13 22:20:38.162536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.000 [2024-07-13 22:20:38.162571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.000 [2024-07-13 22:20:38.162613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.000 [2024-07-13 22:20:38.162634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.000 [2024-07-13 22:20:38.162674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.000 qpair failed and we were unable to recover it. 00:37:19.000 [2024-07-13 22:20:38.172328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.000 [2024-07-13 22:20:38.172507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.000 [2024-07-13 22:20:38.172540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.000 [2024-07-13 22:20:38.172562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.000 [2024-07-13 22:20:38.172580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.000 [2024-07-13 22:20:38.172620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.000 qpair failed and we were unable to recover it. 00:37:19.000 [2024-07-13 22:20:38.182431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.000 [2024-07-13 22:20:38.182642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.000 [2024-07-13 22:20:38.182675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.000 [2024-07-13 22:20:38.182696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.000 [2024-07-13 22:20:38.182714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.000 [2024-07-13 22:20:38.182768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.000 qpair failed and we were unable to recover it. 
00:37:19.000 [2024-07-13 22:20:38.192416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.000 [2024-07-13 22:20:38.192606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.000 [2024-07-13 22:20:38.192639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.000 [2024-07-13 22:20:38.192660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.000 [2024-07-13 22:20:38.192678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.000 [2024-07-13 22:20:38.192718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.000 qpair failed and we were unable to recover it. 00:37:19.000 [2024-07-13 22:20:38.202391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.000 [2024-07-13 22:20:38.202557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.000 [2024-07-13 22:20:38.202590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.000 [2024-07-13 22:20:38.202611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.000 [2024-07-13 22:20:38.202630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.000 [2024-07-13 22:20:38.202668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.000 qpair failed and we were unable to recover it. 00:37:19.000 [2024-07-13 22:20:38.212449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.000 [2024-07-13 22:20:38.212633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.000 [2024-07-13 22:20:38.212666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.000 [2024-07-13 22:20:38.212688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.000 [2024-07-13 22:20:38.212707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.000 [2024-07-13 22:20:38.212746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.000 qpair failed and we were unable to recover it. 
00:37:19.000 [2024-07-13 22:20:38.222437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.000 [2024-07-13 22:20:38.222640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.000 [2024-07-13 22:20:38.222673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.000 [2024-07-13 22:20:38.222695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.000 [2024-07-13 22:20:38.222714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.000 [2024-07-13 22:20:38.222753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.000 qpair failed and we were unable to recover it. 00:37:19.000 [2024-07-13 22:20:38.232553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.000 [2024-07-13 22:20:38.232762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.000 [2024-07-13 22:20:38.232794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.000 [2024-07-13 22:20:38.232814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.000 [2024-07-13 22:20:38.232831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.000 [2024-07-13 22:20:38.232896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.000 qpair failed and we were unable to recover it. 00:37:19.000 [2024-07-13 22:20:38.242540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.000 [2024-07-13 22:20:38.242717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.000 [2024-07-13 22:20:38.242752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.001 [2024-07-13 22:20:38.242774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.001 [2024-07-13 22:20:38.242793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.001 [2024-07-13 22:20:38.242832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.001 qpair failed and we were unable to recover it. 
00:37:19.001 [2024-07-13 22:20:38.252581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.001 [2024-07-13 22:20:38.252770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.001 [2024-07-13 22:20:38.252808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.001 [2024-07-13 22:20:38.252832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.001 [2024-07-13 22:20:38.252850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.001 [2024-07-13 22:20:38.252900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.001 qpair failed and we were unable to recover it. 00:37:19.001 [2024-07-13 22:20:38.262570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.001 [2024-07-13 22:20:38.262742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.001 [2024-07-13 22:20:38.262775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.001 [2024-07-13 22:20:38.262797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.001 [2024-07-13 22:20:38.262816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.001 [2024-07-13 22:20:38.262860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.001 qpair failed and we were unable to recover it. 00:37:19.001 [2024-07-13 22:20:38.272650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.001 [2024-07-13 22:20:38.272847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.001 [2024-07-13 22:20:38.272888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.001 [2024-07-13 22:20:38.272912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.001 [2024-07-13 22:20:38.272930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.001 [2024-07-13 22:20:38.272971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.001 qpair failed and we were unable to recover it. 
00:37:19.001 [2024-07-13 22:20:38.282592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.001 [2024-07-13 22:20:38.282769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.001 [2024-07-13 22:20:38.282803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.001 [2024-07-13 22:20:38.282825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.001 [2024-07-13 22:20:38.282844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.001 [2024-07-13 22:20:38.282892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.001 qpair failed and we were unable to recover it. 00:37:19.001 [2024-07-13 22:20:38.292684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.001 [2024-07-13 22:20:38.292877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.001 [2024-07-13 22:20:38.292911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.001 [2024-07-13 22:20:38.292933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.001 [2024-07-13 22:20:38.292952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.001 [2024-07-13 22:20:38.292998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.001 qpair failed and we were unable to recover it. 00:37:19.001 [2024-07-13 22:20:38.302724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.001 [2024-07-13 22:20:38.302920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.001 [2024-07-13 22:20:38.302954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.001 [2024-07-13 22:20:38.302977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.001 [2024-07-13 22:20:38.302995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.001 [2024-07-13 22:20:38.303036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.001 qpair failed and we were unable to recover it. 
00:37:19.001 [2024-07-13 22:20:38.312741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.001 [2024-07-13 22:20:38.312940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.001 [2024-07-13 22:20:38.312974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.001 [2024-07-13 22:20:38.312997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.001 [2024-07-13 22:20:38.313014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.001 [2024-07-13 22:20:38.313053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.001 qpair failed and we were unable to recover it. 00:37:19.001 [2024-07-13 22:20:38.322743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.001 [2024-07-13 22:20:38.322924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.001 [2024-07-13 22:20:38.322957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.001 [2024-07-13 22:20:38.322979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.001 [2024-07-13 22:20:38.322998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.001 [2024-07-13 22:20:38.323038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.001 qpair failed and we were unable to recover it. 00:37:19.001 [2024-07-13 22:20:38.332816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.001 [2024-07-13 22:20:38.333016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.001 [2024-07-13 22:20:38.333050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.001 [2024-07-13 22:20:38.333072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.001 [2024-07-13 22:20:38.333090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.001 [2024-07-13 22:20:38.333129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.001 qpair failed and we were unable to recover it. 
00:37:19.001 [2024-07-13 22:20:38.342810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.001 [2024-07-13 22:20:38.342999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.001 [2024-07-13 22:20:38.343037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.001 [2024-07-13 22:20:38.343060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.001 [2024-07-13 22:20:38.343079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.001 [2024-07-13 22:20:38.343118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.001 qpair failed and we were unable to recover it. 00:37:19.001 [2024-07-13 22:20:38.352854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.001 [2024-07-13 22:20:38.353051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.001 [2024-07-13 22:20:38.353085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.001 [2024-07-13 22:20:38.353107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.001 [2024-07-13 22:20:38.353125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.001 [2024-07-13 22:20:38.353164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.001 qpair failed and we were unable to recover it. 00:37:19.001 [2024-07-13 22:20:38.362895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.001 [2024-07-13 22:20:38.363078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.001 [2024-07-13 22:20:38.363111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.001 [2024-07-13 22:20:38.363134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.001 [2024-07-13 22:20:38.363152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.001 [2024-07-13 22:20:38.363191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.001 qpair failed and we were unable to recover it. 
00:37:19.001 [2024-07-13 22:20:38.372915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.001 [2024-07-13 22:20:38.373098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.001 [2024-07-13 22:20:38.373131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.001 [2024-07-13 22:20:38.373152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.001 [2024-07-13 22:20:38.373171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.001 [2024-07-13 22:20:38.373210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.001 qpair failed and we were unable to recover it. 00:37:19.001 [2024-07-13 22:20:38.382903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.001 [2024-07-13 22:20:38.383074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.001 [2024-07-13 22:20:38.383107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.001 [2024-07-13 22:20:38.383130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.001 [2024-07-13 22:20:38.383154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.001 [2024-07-13 22:20:38.383194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.001 qpair failed and we were unable to recover it. 00:37:19.261 [2024-07-13 22:20:38.392975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.261 [2024-07-13 22:20:38.393159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.261 [2024-07-13 22:20:38.393193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.261 [2024-07-13 22:20:38.393215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.261 [2024-07-13 22:20:38.393233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.261 [2024-07-13 22:20:38.393284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.261 qpair failed and we were unable to recover it. 
00:37:19.261 [2024-07-13 22:20:38.402986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.261 [2024-07-13 22:20:38.403155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.261 [2024-07-13 22:20:38.403189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.261 [2024-07-13 22:20:38.403211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.261 [2024-07-13 22:20:38.403229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.261 [2024-07-13 22:20:38.403269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.261 qpair failed and we were unable to recover it. 00:37:19.261 [2024-07-13 22:20:38.413002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.261 [2024-07-13 22:20:38.413235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.261 [2024-07-13 22:20:38.413268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.261 [2024-07-13 22:20:38.413290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.261 [2024-07-13 22:20:38.413309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.261 [2024-07-13 22:20:38.413348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.261 qpair failed and we were unable to recover it. 00:37:19.261 [2024-07-13 22:20:38.423071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.261 [2024-07-13 22:20:38.423251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.261 [2024-07-13 22:20:38.423285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.261 [2024-07-13 22:20:38.423307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.261 [2024-07-13 22:20:38.423325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.261 [2024-07-13 22:20:38.423364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.261 qpair failed and we were unable to recover it. 
00:37:19.261 [2024-07-13 22:20:38.433122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.261 [2024-07-13 22:20:38.433346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.261 [2024-07-13 22:20:38.433380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.262 [2024-07-13 22:20:38.433402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.262 [2024-07-13 22:20:38.433420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.262 [2024-07-13 22:20:38.433459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.262 qpair failed and we were unable to recover it. 00:37:19.262 [2024-07-13 22:20:38.443108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.262 [2024-07-13 22:20:38.443279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.262 [2024-07-13 22:20:38.443312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.262 [2024-07-13 22:20:38.443335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.262 [2024-07-13 22:20:38.443353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.262 [2024-07-13 22:20:38.443391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.262 qpair failed and we were unable to recover it. 00:37:19.262 [2024-07-13 22:20:38.453162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.262 [2024-07-13 22:20:38.453383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.262 [2024-07-13 22:20:38.453416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.262 [2024-07-13 22:20:38.453438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.262 [2024-07-13 22:20:38.453457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.262 [2024-07-13 22:20:38.453496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.262 qpair failed and we were unable to recover it. 
00:37:19.262 [2024-07-13 22:20:38.463157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.262 [2024-07-13 22:20:38.463341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.262 [2024-07-13 22:20:38.463395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.262 [2024-07-13 22:20:38.463417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.262 [2024-07-13 22:20:38.463435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.262 [2024-07-13 22:20:38.463488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.262 qpair failed and we were unable to recover it. 00:37:19.262 [2024-07-13 22:20:38.473203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.262 [2024-07-13 22:20:38.473390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.262 [2024-07-13 22:20:38.473422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.262 [2024-07-13 22:20:38.473450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.262 [2024-07-13 22:20:38.473469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.262 [2024-07-13 22:20:38.473509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.262 qpair failed and we were unable to recover it. 00:37:19.262 [2024-07-13 22:20:38.483295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.262 [2024-07-13 22:20:38.483495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.262 [2024-07-13 22:20:38.483528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.262 [2024-07-13 22:20:38.483549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.262 [2024-07-13 22:20:38.483567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.262 [2024-07-13 22:20:38.483620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.262 qpair failed and we were unable to recover it. 
00:37:19.262 [2024-07-13 22:20:38.493251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.262 [2024-07-13 22:20:38.493454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.262 [2024-07-13 22:20:38.493487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.262 [2024-07-13 22:20:38.493509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.262 [2024-07-13 22:20:38.493528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.262 [2024-07-13 22:20:38.493567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.262 qpair failed and we were unable to recover it. 00:37:19.262 [2024-07-13 22:20:38.503280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.262 [2024-07-13 22:20:38.503454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.262 [2024-07-13 22:20:38.503488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.262 [2024-07-13 22:20:38.503510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.262 [2024-07-13 22:20:38.503528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.262 [2024-07-13 22:20:38.503568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.262 qpair failed and we were unable to recover it. 00:37:19.262 [2024-07-13 22:20:38.513534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.262 [2024-07-13 22:20:38.513714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.262 [2024-07-13 22:20:38.513747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.262 [2024-07-13 22:20:38.513769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.262 [2024-07-13 22:20:38.513788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.262 [2024-07-13 22:20:38.513828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.262 qpair failed and we were unable to recover it. 
00:37:19.262 [2024-07-13 22:20:38.523298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.262 [2024-07-13 22:20:38.523468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.262 [2024-07-13 22:20:38.523502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.262 [2024-07-13 22:20:38.523524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.262 [2024-07-13 22:20:38.523543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.262 [2024-07-13 22:20:38.523582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.262 qpair failed and we were unable to recover it. 00:37:19.262 [2024-07-13 22:20:38.533405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.262 [2024-07-13 22:20:38.533579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.262 [2024-07-13 22:20:38.533625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.262 [2024-07-13 22:20:38.533647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.262 [2024-07-13 22:20:38.533665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.262 [2024-07-13 22:20:38.533719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.262 qpair failed and we were unable to recover it. 00:37:19.262 [2024-07-13 22:20:38.543400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.262 [2024-07-13 22:20:38.543591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.262 [2024-07-13 22:20:38.543638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.262 [2024-07-13 22:20:38.543660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.262 [2024-07-13 22:20:38.543678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.262 [2024-07-13 22:20:38.543731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.262 qpair failed and we were unable to recover it. 
00:37:19.262 [2024-07-13 22:20:38.553438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.262 [2024-07-13 22:20:38.553626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.262 [2024-07-13 22:20:38.553659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.262 [2024-07-13 22:20:38.553682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.262 [2024-07-13 22:20:38.553699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.262 [2024-07-13 22:20:38.553738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.262 qpair failed and we were unable to recover it. 00:37:19.262 [2024-07-13 22:20:38.563547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.262 [2024-07-13 22:20:38.563746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.262 [2024-07-13 22:20:38.563778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.262 [2024-07-13 22:20:38.563804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.262 [2024-07-13 22:20:38.563823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.262 [2024-07-13 22:20:38.563884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.262 qpair failed and we were unable to recover it. 00:37:19.262 [2024-07-13 22:20:38.573579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.262 [2024-07-13 22:20:38.573791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.262 [2024-07-13 22:20:38.573822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.262 [2024-07-13 22:20:38.573843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.262 [2024-07-13 22:20:38.573886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.263 [2024-07-13 22:20:38.573928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.263 qpair failed and we were unable to recover it. 
00:37:19.263 [2024-07-13 22:20:38.583539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.263 [2024-07-13 22:20:38.583720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.263 [2024-07-13 22:20:38.583754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.263 [2024-07-13 22:20:38.583776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.263 [2024-07-13 22:20:38.583794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.263 [2024-07-13 22:20:38.583833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.263 qpair failed and we were unable to recover it. 00:37:19.263 [2024-07-13 22:20:38.593584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.263 [2024-07-13 22:20:38.593767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.263 [2024-07-13 22:20:38.593816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.263 [2024-07-13 22:20:38.593837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.263 [2024-07-13 22:20:38.593877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.263 [2024-07-13 22:20:38.593920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.263 qpair failed and we were unable to recover it. 00:37:19.263 [2024-07-13 22:20:38.603552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.263 [2024-07-13 22:20:38.603730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.263 [2024-07-13 22:20:38.603778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.263 [2024-07-13 22:20:38.603801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.263 [2024-07-13 22:20:38.603819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.263 [2024-07-13 22:20:38.603880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.263 qpair failed and we were unable to recover it. 
00:37:19.263 [2024-07-13 22:20:38.613601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.263 [2024-07-13 22:20:38.613778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.263 [2024-07-13 22:20:38.613812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.263 [2024-07-13 22:20:38.613834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.263 [2024-07-13 22:20:38.613853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.263 [2024-07-13 22:20:38.613902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.263 qpair failed and we were unable to recover it. 00:37:19.263 [2024-07-13 22:20:38.623620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.263 [2024-07-13 22:20:38.623801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.263 [2024-07-13 22:20:38.623835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.263 [2024-07-13 22:20:38.623858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.263 [2024-07-13 22:20:38.623888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.263 [2024-07-13 22:20:38.623930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.263 qpair failed and we were unable to recover it. 00:37:19.263 [2024-07-13 22:20:38.633723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.263 [2024-07-13 22:20:38.633935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.263 [2024-07-13 22:20:38.633968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.263 [2024-07-13 22:20:38.633991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.263 [2024-07-13 22:20:38.634008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.263 [2024-07-13 22:20:38.634049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.263 qpair failed and we were unable to recover it. 
00:37:19.263 [2024-07-13 22:20:38.643876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.263 [2024-07-13 22:20:38.644049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.263 [2024-07-13 22:20:38.644082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.263 [2024-07-13 22:20:38.644105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.263 [2024-07-13 22:20:38.644123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.263 [2024-07-13 22:20:38.644163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.263 qpair failed and we were unable to recover it. 00:37:19.263 [2024-07-13 22:20:38.653758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.263 [2024-07-13 22:20:38.653952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.263 [2024-07-13 22:20:38.653991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.263 [2024-07-13 22:20:38.654024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.263 [2024-07-13 22:20:38.654043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.263 [2024-07-13 22:20:38.654107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.263 qpair failed and we were unable to recover it. 00:37:19.523 [2024-07-13 22:20:38.663973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.523 [2024-07-13 22:20:38.664212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.523 [2024-07-13 22:20:38.664245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.523 [2024-07-13 22:20:38.664266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.523 [2024-07-13 22:20:38.664285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.523 [2024-07-13 22:20:38.664339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.523 qpair failed and we were unable to recover it. 
00:37:19.523 [2024-07-13 22:20:38.673836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.523 [2024-07-13 22:20:38.674042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.523 [2024-07-13 22:20:38.674076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.523 [2024-07-13 22:20:38.674098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.523 [2024-07-13 22:20:38.674130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.523 [2024-07-13 22:20:38.674195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.523 qpair failed and we were unable to recover it. 00:37:19.523 [2024-07-13 22:20:38.683843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.523 [2024-07-13 22:20:38.684064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.523 [2024-07-13 22:20:38.684098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.523 [2024-07-13 22:20:38.684121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.523 [2024-07-13 22:20:38.684140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.523 [2024-07-13 22:20:38.684181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.523 qpair failed and we were unable to recover it. 00:37:19.523 [2024-07-13 22:20:38.693876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.523 [2024-07-13 22:20:38.694095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.523 [2024-07-13 22:20:38.694128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.523 [2024-07-13 22:20:38.694158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.523 [2024-07-13 22:20:38.694176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.523 [2024-07-13 22:20:38.694223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.523 qpair failed and we were unable to recover it. 
00:37:19.523 [2024-07-13 22:20:38.703843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.523 [2024-07-13 22:20:38.704070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.523 [2024-07-13 22:20:38.704103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.523 [2024-07-13 22:20:38.704126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.523 [2024-07-13 22:20:38.704154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.523 [2024-07-13 22:20:38.704194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.523 qpair failed and we were unable to recover it. 00:37:19.523 [2024-07-13 22:20:38.714120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.523 [2024-07-13 22:20:38.714308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.523 [2024-07-13 22:20:38.714340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.523 [2024-07-13 22:20:38.714362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.523 [2024-07-13 22:20:38.714380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.523 [2024-07-13 22:20:38.714418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.523 qpair failed and we were unable to recover it. 00:37:19.523 [2024-07-13 22:20:38.723939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.523 [2024-07-13 22:20:38.724112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.523 [2024-07-13 22:20:38.724155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.523 [2024-07-13 22:20:38.724178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.523 [2024-07-13 22:20:38.724197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.523 [2024-07-13 22:20:38.724237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.523 qpair failed and we were unable to recover it. 
00:37:19.523 [2024-07-13 22:20:38.733928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.523 [2024-07-13 22:20:38.734121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.523 [2024-07-13 22:20:38.734153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.523 [2024-07-13 22:20:38.734175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.523 [2024-07-13 22:20:38.734204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.523 [2024-07-13 22:20:38.734244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.523 qpair failed and we were unable to recover it. 00:37:19.523 [2024-07-13 22:20:38.743979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.523 [2024-07-13 22:20:38.744149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.523 [2024-07-13 22:20:38.744188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.523 [2024-07-13 22:20:38.744211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.523 [2024-07-13 22:20:38.744229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.523 [2024-07-13 22:20:38.744269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.523 qpair failed and we were unable to recover it. 00:37:19.523 [2024-07-13 22:20:38.754109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.523 [2024-07-13 22:20:38.754350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.523 [2024-07-13 22:20:38.754382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.523 [2024-07-13 22:20:38.754404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.523 [2024-07-13 22:20:38.754421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.523 [2024-07-13 22:20:38.754476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.523 qpair failed and we were unable to recover it. 
00:37:19.523 [2024-07-13 22:20:38.764012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.523 [2024-07-13 22:20:38.764188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.523 [2024-07-13 22:20:38.764220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.523 [2024-07-13 22:20:38.764242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.523 [2024-07-13 22:20:38.764260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.523 [2024-07-13 22:20:38.764303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.523 qpair failed and we were unable to recover it. 00:37:19.523 [2024-07-13 22:20:38.774083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.523 [2024-07-13 22:20:38.774273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.523 [2024-07-13 22:20:38.774307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.523 [2024-07-13 22:20:38.774329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.523 [2024-07-13 22:20:38.774347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.523 [2024-07-13 22:20:38.774387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.523 qpair failed and we were unable to recover it. 00:37:19.523 [2024-07-13 22:20:38.784110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:19.523 [2024-07-13 22:20:38.784307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:19.523 [2024-07-13 22:20:38.784340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:19.523 [2024-07-13 22:20:38.784362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:19.523 [2024-07-13 22:20:38.784388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:19.523 [2024-07-13 22:20:38.784428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:19.523 qpair failed and we were unable to recover it. 
00:37:19.523 [2024-07-13 22:20:38.794264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.524 [2024-07-13 22:20:38.794468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.524 [2024-07-13 22:20:38.794499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.524 [2024-07-13 22:20:38.794521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.524 [2024-07-13 22:20:38.794537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.524 [2024-07-13 22:20:38.794592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.524 qpair failed and we were unable to recover it.
00:37:19.524 [2024-07-13 22:20:38.804191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.524 [2024-07-13 22:20:38.804395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.524 [2024-07-13 22:20:38.804428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.524 [2024-07-13 22:20:38.804449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.524 [2024-07-13 22:20:38.804467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.524 [2024-07-13 22:20:38.804507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.524 qpair failed and we were unable to recover it.
00:37:19.524 [2024-07-13 22:20:38.814158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.524 [2024-07-13 22:20:38.814385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.524 [2024-07-13 22:20:38.814417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.524 [2024-07-13 22:20:38.814440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.524 [2024-07-13 22:20:38.814458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.524 [2024-07-13 22:20:38.814497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.524 qpair failed and we were unable to recover it.
00:37:19.524 [2024-07-13 22:20:38.824202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.524 [2024-07-13 22:20:38.824379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.524 [2024-07-13 22:20:38.824412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.524 [2024-07-13 22:20:38.824434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.524 [2024-07-13 22:20:38.824453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.524 [2024-07-13 22:20:38.824492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.524 qpair failed and we were unable to recover it.
00:37:19.524 [2024-07-13 22:20:38.834249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.524 [2024-07-13 22:20:38.834448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.524 [2024-07-13 22:20:38.834481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.524 [2024-07-13 22:20:38.834503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.524 [2024-07-13 22:20:38.834521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.524 [2024-07-13 22:20:38.834561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.524 qpair failed and we were unable to recover it.
00:37:19.524 [2024-07-13 22:20:38.844307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.524 [2024-07-13 22:20:38.844501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.524 [2024-07-13 22:20:38.844534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.524 [2024-07-13 22:20:38.844556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.524 [2024-07-13 22:20:38.844574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.524 [2024-07-13 22:20:38.844613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.524 qpair failed and we were unable to recover it.
00:37:19.524 [2024-07-13 22:20:38.854304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.524 [2024-07-13 22:20:38.854489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.524 [2024-07-13 22:20:38.854527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.524 [2024-07-13 22:20:38.854550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.524 [2024-07-13 22:20:38.854568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.524 [2024-07-13 22:20:38.854608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.524 qpair failed and we were unable to recover it.
00:37:19.524 [2024-07-13 22:20:38.864331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.524 [2024-07-13 22:20:38.864515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.524 [2024-07-13 22:20:38.864549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.524 [2024-07-13 22:20:38.864586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.524 [2024-07-13 22:20:38.864604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.524 [2024-07-13 22:20:38.864657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.524 qpair failed and we were unable to recover it.
00:37:19.524 [2024-07-13 22:20:38.874401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.524 [2024-07-13 22:20:38.874651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.524 [2024-07-13 22:20:38.874683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.524 [2024-07-13 22:20:38.874704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.524 [2024-07-13 22:20:38.874728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.524 [2024-07-13 22:20:38.874782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.524 qpair failed and we were unable to recover it.
00:37:19.524 [2024-07-13 22:20:38.884411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.524 [2024-07-13 22:20:38.884590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.524 [2024-07-13 22:20:38.884624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.524 [2024-07-13 22:20:38.884661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.524 [2024-07-13 22:20:38.884680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.524 [2024-07-13 22:20:38.884733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.524 qpair failed and we were unable to recover it.
00:37:19.524 [2024-07-13 22:20:38.894403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.524 [2024-07-13 22:20:38.894577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.524 [2024-07-13 22:20:38.894610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.524 [2024-07-13 22:20:38.894632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.524 [2024-07-13 22:20:38.894651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.524 [2024-07-13 22:20:38.894691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.524 qpair failed and we were unable to recover it.
00:37:19.524 [2024-07-13 22:20:38.904469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.524 [2024-07-13 22:20:38.904667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.524 [2024-07-13 22:20:38.904700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.524 [2024-07-13 22:20:38.904722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.524 [2024-07-13 22:20:38.904740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.524 [2024-07-13 22:20:38.904780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.524 qpair failed and we were unable to recover it.
00:37:19.524 [2024-07-13 22:20:38.914518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.524 [2024-07-13 22:20:38.914698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.524 [2024-07-13 22:20:38.914732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.524 [2024-07-13 22:20:38.914754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.524 [2024-07-13 22:20:38.914772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.524 [2024-07-13 22:20:38.914812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.524 qpair failed and we were unable to recover it.
00:37:19.783 [2024-07-13 22:20:38.924479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.783 [2024-07-13 22:20:38.924649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.784 [2024-07-13 22:20:38.924683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.784 [2024-07-13 22:20:38.924706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.784 [2024-07-13 22:20:38.924725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.784 [2024-07-13 22:20:38.924765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.784 qpair failed and we were unable to recover it.
00:37:19.784 [2024-07-13 22:20:38.934546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.784 [2024-07-13 22:20:38.934727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.784 [2024-07-13 22:20:38.934770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.784 [2024-07-13 22:20:38.934794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.784 [2024-07-13 22:20:38.934814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.784 [2024-07-13 22:20:38.934860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.784 qpair failed and we were unable to recover it.
00:37:19.784 [2024-07-13 22:20:38.944532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.784 [2024-07-13 22:20:38.944707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.784 [2024-07-13 22:20:38.944740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.784 [2024-07-13 22:20:38.944762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.784 [2024-07-13 22:20:38.944781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.784 [2024-07-13 22:20:38.944820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.784 qpair failed and we were unable to recover it.
00:37:19.784 [2024-07-13 22:20:38.954626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.784 [2024-07-13 22:20:38.954830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.784 [2024-07-13 22:20:38.954875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.784 [2024-07-13 22:20:38.954900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.784 [2024-07-13 22:20:38.954919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.784 [2024-07-13 22:20:38.954958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.784 qpair failed and we were unable to recover it.
00:37:19.784 [2024-07-13 22:20:38.964687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.784 [2024-07-13 22:20:38.964893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.784 [2024-07-13 22:20:38.964927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.784 [2024-07-13 22:20:38.964957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.784 [2024-07-13 22:20:38.964977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.784 [2024-07-13 22:20:38.965018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.784 qpair failed and we were unable to recover it.
00:37:19.784 [2024-07-13 22:20:38.974731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.784 [2024-07-13 22:20:38.974916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.784 [2024-07-13 22:20:38.974949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.784 [2024-07-13 22:20:38.974972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.784 [2024-07-13 22:20:38.974990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.784 [2024-07-13 22:20:38.975029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.784 qpair failed and we were unable to recover it.
00:37:19.784 [2024-07-13 22:20:38.984682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.784 [2024-07-13 22:20:38.984856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.784 [2024-07-13 22:20:38.984897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.784 [2024-07-13 22:20:38.984920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.784 [2024-07-13 22:20:38.984939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.784 [2024-07-13 22:20:38.984978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.784 qpair failed and we were unable to recover it.
00:37:19.784 [2024-07-13 22:20:38.994784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.784 [2024-07-13 22:20:38.994987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.784 [2024-07-13 22:20:38.995020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.784 [2024-07-13 22:20:38.995042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.784 [2024-07-13 22:20:38.995060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.784 [2024-07-13 22:20:38.995101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.784 qpair failed and we were unable to recover it.
00:37:19.784 [2024-07-13 22:20:39.004701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.784 [2024-07-13 22:20:39.004872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.784 [2024-07-13 22:20:39.004906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.784 [2024-07-13 22:20:39.004929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.784 [2024-07-13 22:20:39.004947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.784 [2024-07-13 22:20:39.004987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.784 qpair failed and we were unable to recover it.
00:37:19.784 [2024-07-13 22:20:39.014765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.784 [2024-07-13 22:20:39.014953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.784 [2024-07-13 22:20:39.014987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.784 [2024-07-13 22:20:39.015009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.784 [2024-07-13 22:20:39.015027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.784 [2024-07-13 22:20:39.015066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.784 qpair failed and we were unable to recover it.
00:37:19.784 [2024-07-13 22:20:39.024823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.784 [2024-07-13 22:20:39.025018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.784 [2024-07-13 22:20:39.025051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.784 [2024-07-13 22:20:39.025073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.784 [2024-07-13 22:20:39.025092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.784 [2024-07-13 22:20:39.025132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.784 qpair failed and we were unable to recover it.
00:37:19.784 [2024-07-13 22:20:39.034897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.784 [2024-07-13 22:20:39.035094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.784 [2024-07-13 22:20:39.035127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.784 [2024-07-13 22:20:39.035165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.784 [2024-07-13 22:20:39.035182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.784 [2024-07-13 22:20:39.035236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.784 qpair failed and we were unable to recover it.
00:37:19.784 [2024-07-13 22:20:39.044892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.784 [2024-07-13 22:20:39.045062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.784 [2024-07-13 22:20:39.045095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.784 [2024-07-13 22:20:39.045117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.784 [2024-07-13 22:20:39.045136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.784 [2024-07-13 22:20:39.045181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.784 qpair failed and we were unable to recover it.
00:37:19.784 [2024-07-13 22:20:39.054860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.784 [2024-07-13 22:20:39.055051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.784 [2024-07-13 22:20:39.055089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.784 [2024-07-13 22:20:39.055112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.784 [2024-07-13 22:20:39.055130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.784 [2024-07-13 22:20:39.055170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.784 qpair failed and we were unable to recover it.
00:37:19.784 [2024-07-13 22:20:39.064935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.784 [2024-07-13 22:20:39.065108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.785 [2024-07-13 22:20:39.065141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.785 [2024-07-13 22:20:39.065164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.785 [2024-07-13 22:20:39.065182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.785 [2024-07-13 22:20:39.065222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.785 qpair failed and we were unable to recover it.
00:37:19.785 [2024-07-13 22:20:39.074964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.785 [2024-07-13 22:20:39.075148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.785 [2024-07-13 22:20:39.075181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.785 [2024-07-13 22:20:39.075203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.785 [2024-07-13 22:20:39.075221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.785 [2024-07-13 22:20:39.075260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.785 qpair failed and we were unable to recover it.
00:37:19.785 [2024-07-13 22:20:39.084962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.785 [2024-07-13 22:20:39.085136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.785 [2024-07-13 22:20:39.085169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.785 [2024-07-13 22:20:39.085192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.785 [2024-07-13 22:20:39.085211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.785 [2024-07-13 22:20:39.085249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.785 qpair failed and we were unable to recover it.
00:37:19.785 [2024-07-13 22:20:39.095014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.785 [2024-07-13 22:20:39.095215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.785 [2024-07-13 22:20:39.095249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.785 [2024-07-13 22:20:39.095270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.785 [2024-07-13 22:20:39.095288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.785 [2024-07-13 22:20:39.095333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.785 qpair failed and we were unable to recover it.
00:37:19.785 [2024-07-13 22:20:39.105016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.785 [2024-07-13 22:20:39.105195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.785 [2024-07-13 22:20:39.105229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.785 [2024-07-13 22:20:39.105251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.785 [2024-07-13 22:20:39.105269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.785 [2024-07-13 22:20:39.105308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.785 qpair failed and we were unable to recover it.
00:37:19.785 [2024-07-13 22:20:39.115113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.785 [2024-07-13 22:20:39.115300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.785 [2024-07-13 22:20:39.115334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.785 [2024-07-13 22:20:39.115356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.785 [2024-07-13 22:20:39.115374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.785 [2024-07-13 22:20:39.115414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.785 qpair failed and we were unable to recover it.
00:37:19.785 [2024-07-13 22:20:39.125269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.785 [2024-07-13 22:20:39.125510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.785 [2024-07-13 22:20:39.125543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.785 [2024-07-13 22:20:39.125565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.785 [2024-07-13 22:20:39.125584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.785 [2024-07-13 22:20:39.125638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.785 qpair failed and we were unable to recover it.
00:37:19.785 [2024-07-13 22:20:39.135129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.785 [2024-07-13 22:20:39.135311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.785 [2024-07-13 22:20:39.135358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.785 [2024-07-13 22:20:39.135379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.785 [2024-07-13 22:20:39.135397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.785 [2024-07-13 22:20:39.135450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.785 qpair failed and we were unable to recover it.
00:37:19.785 [2024-07-13 22:20:39.145170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.785 [2024-07-13 22:20:39.145349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.785 [2024-07-13 22:20:39.145403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.785 [2024-07-13 22:20:39.145426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.785 [2024-07-13 22:20:39.145444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.785 [2024-07-13 22:20:39.145497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.785 qpair failed and we were unable to recover it.
00:37:19.785 [2024-07-13 22:20:39.155253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.785 [2024-07-13 22:20:39.155442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.785 [2024-07-13 22:20:39.155491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.785 [2024-07-13 22:20:39.155512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.785 [2024-07-13 22:20:39.155529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.785 [2024-07-13 22:20:39.155583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.785 qpair failed and we were unable to recover it.
00:37:19.785 [2024-07-13 22:20:39.165187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.785 [2024-07-13 22:20:39.165361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.785 [2024-07-13 22:20:39.165395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.785 [2024-07-13 22:20:39.165417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.785 [2024-07-13 22:20:39.165435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.785 [2024-07-13 22:20:39.165474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.785 qpair failed and we were unable to recover it.
00:37:19.785 [2024-07-13 22:20:39.175295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:19.785 [2024-07-13 22:20:39.175486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:19.785 [2024-07-13 22:20:39.175520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:19.785 [2024-07-13 22:20:39.175542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:19.785 [2024-07-13 22:20:39.175560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:19.785 [2024-07-13 22:20:39.175600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:19.785 qpair failed and we were unable to recover it.
00:37:20.044 [2024-07-13 22:20:39.185239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:20.044 [2024-07-13 22:20:39.185431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:20.044 [2024-07-13 22:20:39.185465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:20.044 [2024-07-13 22:20:39.185487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:20.044 [2024-07-13 22:20:39.185512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:20.044 [2024-07-13 22:20:39.185588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:20.044 qpair failed and we were unable to recover it.
00:37:20.044 [2024-07-13 22:20:39.195365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:20.044 [2024-07-13 22:20:39.195558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:20.044 [2024-07-13 22:20:39.195591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:20.044 [2024-07-13 22:20:39.195612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:20.044 [2024-07-13 22:20:39.195630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:20.044 [2024-07-13 22:20:39.195670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:20.044 qpair failed and we were unable to recover it.
00:37:20.044 [2024-07-13 22:20:39.205307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:20.044 [2024-07-13 22:20:39.205484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:20.044 [2024-07-13 22:20:39.205516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:20.044 [2024-07-13 22:20:39.205538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:20.044 [2024-07-13 22:20:39.205559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:20.044 [2024-07-13 22:20:39.205598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:20.044 qpair failed and we were unable to recover it.
00:37:20.044 [2024-07-13 22:20:39.215360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:20.044 [2024-07-13 22:20:39.215575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:20.044 [2024-07-13 22:20:39.215608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:20.044 [2024-07-13 22:20:39.215630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:20.044 [2024-07-13 22:20:39.215649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:20.044 [2024-07-13 22:20:39.215688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:20.044 qpair failed and we were unable to recover it.
00:37:20.044 [2024-07-13 22:20:39.225362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:20.044 [2024-07-13 22:20:39.225551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:20.044 [2024-07-13 22:20:39.225584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:20.044 [2024-07-13 22:20:39.225606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:20.044 [2024-07-13 22:20:39.225624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:20.044 [2024-07-13 22:20:39.225663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:20.044 qpair failed and we were unable to recover it.
00:37:20.044 [2024-07-13 22:20:39.235400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:20.044 [2024-07-13 22:20:39.235594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:20.045 [2024-07-13 22:20:39.235627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:20.045 [2024-07-13 22:20:39.235649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:20.045 [2024-07-13 22:20:39.235667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:20.045 [2024-07-13 22:20:39.235706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-13 22:20:39.245398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:20.045 [2024-07-13 22:20:39.245575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:20.045 [2024-07-13 22:20:39.245614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:20.045 [2024-07-13 22:20:39.245636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:20.045 [2024-07-13 22:20:39.245655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:20.045 [2024-07-13 22:20:39.245694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-13 22:20:39.255453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:20.045 [2024-07-13 22:20:39.255628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:20.045 [2024-07-13 22:20:39.255661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:20.045 [2024-07-13 22:20:39.255683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:20.045 [2024-07-13 22:20:39.255702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:20.045 [2024-07-13 22:20:39.255741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-13 22:20:39.265433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:20.045 [2024-07-13 22:20:39.265603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:20.045 [2024-07-13 22:20:39.265636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:20.045 [2024-07-13 22:20:39.265658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:20.045 [2024-07-13 22:20:39.265677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:20.045 [2024-07-13 22:20:39.265716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-13 22:20:39.275543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:20.045 [2024-07-13 22:20:39.275736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:20.045 [2024-07-13 22:20:39.275787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:20.045 [2024-07-13 22:20:39.275810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:20.045 [2024-07-13 22:20:39.275833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:20.045 [2024-07-13 22:20:39.275897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-13 22:20:39.285586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:20.045 [2024-07-13 22:20:39.285766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:20.045 [2024-07-13 22:20:39.285815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:20.045 [2024-07-13 22:20:39.285836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:20.045 [2024-07-13 22:20:39.285877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:20.045 [2024-07-13 22:20:39.285925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-13 22:20:39.295537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:20.045 [2024-07-13 22:20:39.295713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:20.045 [2024-07-13 22:20:39.295746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:20.045 [2024-07-13 22:20:39.295768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:20.045 [2024-07-13 22:20:39.295786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:20.045 [2024-07-13 22:20:39.295825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-13 22:20:39.305694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:20.045 [2024-07-13 22:20:39.305902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:20.045 [2024-07-13 22:20:39.305935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:20.045 [2024-07-13 22:20:39.305958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:20.045 [2024-07-13 22:20:39.305977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:20.045 [2024-07-13 22:20:39.306017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-13 22:20:39.315646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:20.045 [2024-07-13 22:20:39.315835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:20.045 [2024-07-13 22:20:39.315881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:20.045 [2024-07-13 22:20:39.315907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:20.045 [2024-07-13 22:20:39.315925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:20.045 [2024-07-13 22:20:39.315965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-13 22:20:39.325612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:20.045 [2024-07-13 22:20:39.325773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:20.045 [2024-07-13 22:20:39.325806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:20.045 [2024-07-13 22:20:39.325828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:20.045 [2024-07-13 22:20:39.325857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:20.045 [2024-07-13 22:20:39.325907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-13 22:20:39.335726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:20.045 [2024-07-13 22:20:39.335938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:20.045 [2024-07-13 22:20:39.335972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:20.045 [2024-07-13 22:20:39.335994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:20.045 [2024-07-13 22:20:39.336013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:20.045 [2024-07-13 22:20:39.336052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-13 22:20:39.345697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:20.045 [2024-07-13 22:20:39.345883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:20.045 [2024-07-13 22:20:39.345917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:20.045 [2024-07-13 22:20:39.345939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:20.045 [2024-07-13 22:20:39.345957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:20.045 [2024-07-13 22:20:39.345997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-13 22:20:39.355747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:20.045 [2024-07-13 22:20:39.355947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:20.045 [2024-07-13 22:20:39.355981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:20.045 [2024-07-13 22:20:39.356003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:20.045 [2024-07-13 22:20:39.356021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:20.045 [2024-07-13 22:20:39.356061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:20.045 qpair failed and we were unable to recover it.
00:37:20.045 [2024-07-13 22:20:39.365829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.045 [2024-07-13 22:20:39.366089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.045 [2024-07-13 22:20:39.366122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.045 [2024-07-13 22:20:39.366163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.045 [2024-07-13 22:20:39.366198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.045 [2024-07-13 22:20:39.366251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-13 22:20:39.375771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.045 [2024-07-13 22:20:39.375974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.045 [2024-07-13 22:20:39.376008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.045 [2024-07-13 22:20:39.376030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.045 [2024-07-13 22:20:39.376048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.045 [2024-07-13 22:20:39.376088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.045 qpair failed and we were unable to recover it. 00:37:20.045 [2024-07-13 22:20:39.385942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.045 [2024-07-13 22:20:39.386120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.046 [2024-07-13 22:20:39.386165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.046 [2024-07-13 22:20:39.386187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.046 [2024-07-13 22:20:39.386205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.046 [2024-07-13 22:20:39.386267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.046 qpair failed and we were unable to recover it. 
00:37:20.046 [2024-07-13 22:20:39.395902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.046 [2024-07-13 22:20:39.396092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.046 [2024-07-13 22:20:39.396125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.046 [2024-07-13 22:20:39.396148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.046 [2024-07-13 22:20:39.396169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.046 [2024-07-13 22:20:39.396208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.046 qpair failed and we were unable to recover it. 00:37:20.046 [2024-07-13 22:20:39.405989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.046 [2024-07-13 22:20:39.406231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.046 [2024-07-13 22:20:39.406264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.046 [2024-07-13 22:20:39.406286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.046 [2024-07-13 22:20:39.406304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.046 [2024-07-13 22:20:39.406358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.046 qpair failed and we were unable to recover it. 00:37:20.046 [2024-07-13 22:20:39.415946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.046 [2024-07-13 22:20:39.416124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.046 [2024-07-13 22:20:39.416157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.046 [2024-07-13 22:20:39.416182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.046 [2024-07-13 22:20:39.416200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.046 [2024-07-13 22:20:39.416240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.046 qpair failed and we were unable to recover it. 
00:37:20.046 [2024-07-13 22:20:39.426006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.046 [2024-07-13 22:20:39.426197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.046 [2024-07-13 22:20:39.426248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.046 [2024-07-13 22:20:39.426269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.046 [2024-07-13 22:20:39.426286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.046 [2024-07-13 22:20:39.426341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.046 qpair failed and we were unable to recover it. 00:37:20.046 [2024-07-13 22:20:39.436022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.046 [2024-07-13 22:20:39.436207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.046 [2024-07-13 22:20:39.436251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.046 [2024-07-13 22:20:39.436273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.046 [2024-07-13 22:20:39.436298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.046 [2024-07-13 22:20:39.436344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.046 qpair failed and we were unable to recover it. 00:37:20.307 [2024-07-13 22:20:39.446028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.307 [2024-07-13 22:20:39.446223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.307 [2024-07-13 22:20:39.446271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.307 [2024-07-13 22:20:39.446324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.307 [2024-07-13 22:20:39.446356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.307 [2024-07-13 22:20:39.446417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.307 qpair failed and we were unable to recover it. 
00:37:20.307 [2024-07-13 22:20:39.456045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.307 [2024-07-13 22:20:39.456225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.307 [2024-07-13 22:20:39.456273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.307 [2024-07-13 22:20:39.456296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.307 [2024-07-13 22:20:39.456315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.307 [2024-07-13 22:20:39.456355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.307 qpair failed and we were unable to recover it. 00:37:20.307 [2024-07-13 22:20:39.466056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.307 [2024-07-13 22:20:39.466232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.307 [2024-07-13 22:20:39.466266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.307 [2024-07-13 22:20:39.466289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.307 [2024-07-13 22:20:39.466307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.307 [2024-07-13 22:20:39.466347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.307 qpair failed and we were unable to recover it. 00:37:20.307 [2024-07-13 22:20:39.476077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.307 [2024-07-13 22:20:39.476266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.307 [2024-07-13 22:20:39.476300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.307 [2024-07-13 22:20:39.476322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.307 [2024-07-13 22:20:39.476344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.307 [2024-07-13 22:20:39.476383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.307 qpair failed and we were unable to recover it. 
00:37:20.307 [2024-07-13 22:20:39.486093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.307 [2024-07-13 22:20:39.486289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.307 [2024-07-13 22:20:39.486321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.307 [2024-07-13 22:20:39.486343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.307 [2024-07-13 22:20:39.486362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.307 [2024-07-13 22:20:39.486401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.307 qpair failed and we were unable to recover it. 00:37:20.307 [2024-07-13 22:20:39.496135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.307 [2024-07-13 22:20:39.496328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.307 [2024-07-13 22:20:39.496361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.307 [2024-07-13 22:20:39.496383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.307 [2024-07-13 22:20:39.496401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.307 [2024-07-13 22:20:39.496447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.307 qpair failed and we were unable to recover it. 00:37:20.307 [2024-07-13 22:20:39.506223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.307 [2024-07-13 22:20:39.506414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.307 [2024-07-13 22:20:39.506462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.307 [2024-07-13 22:20:39.506484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.307 [2024-07-13 22:20:39.506502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.307 [2024-07-13 22:20:39.506556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.307 qpair failed and we were unable to recover it. 
00:37:20.307 [2024-07-13 22:20:39.516277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.308 [2024-07-13 22:20:39.516501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.308 [2024-07-13 22:20:39.516552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.308 [2024-07-13 22:20:39.516574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.308 [2024-07-13 22:20:39.516593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.308 [2024-07-13 22:20:39.516647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.308 qpair failed and we were unable to recover it. 00:37:20.308 [2024-07-13 22:20:39.526253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.308 [2024-07-13 22:20:39.526454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.308 [2024-07-13 22:20:39.526487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.308 [2024-07-13 22:20:39.526509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.308 [2024-07-13 22:20:39.526528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.308 [2024-07-13 22:20:39.526568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.308 qpair failed and we were unable to recover it. 00:37:20.308 [2024-07-13 22:20:39.536234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.308 [2024-07-13 22:20:39.536459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.308 [2024-07-13 22:20:39.536492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.308 [2024-07-13 22:20:39.536515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.308 [2024-07-13 22:20:39.536533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.308 [2024-07-13 22:20:39.536572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.308 qpair failed and we were unable to recover it. 
00:37:20.308 [2024-07-13 22:20:39.546291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.308 [2024-07-13 22:20:39.546477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.308 [2024-07-13 22:20:39.546516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.308 [2024-07-13 22:20:39.546540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.308 [2024-07-13 22:20:39.546559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.308 [2024-07-13 22:20:39.546598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.308 qpair failed and we were unable to recover it. 00:37:20.308 [2024-07-13 22:20:39.556360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.308 [2024-07-13 22:20:39.556592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.308 [2024-07-13 22:20:39.556626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.308 [2024-07-13 22:20:39.556648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.308 [2024-07-13 22:20:39.556666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.308 [2024-07-13 22:20:39.556705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.308 qpair failed and we were unable to recover it. 00:37:20.308 [2024-07-13 22:20:39.566303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.308 [2024-07-13 22:20:39.566483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.308 [2024-07-13 22:20:39.566520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.308 [2024-07-13 22:20:39.566543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.308 [2024-07-13 22:20:39.566562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.308 [2024-07-13 22:20:39.566602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.308 qpair failed and we were unable to recover it. 
00:37:20.308 [2024-07-13 22:20:39.576380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.308 [2024-07-13 22:20:39.576559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.308 [2024-07-13 22:20:39.576594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.308 [2024-07-13 22:20:39.576615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.308 [2024-07-13 22:20:39.576633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.308 [2024-07-13 22:20:39.576672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.308 qpair failed and we were unable to recover it. 00:37:20.308 [2024-07-13 22:20:39.586370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.308 [2024-07-13 22:20:39.586547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.308 [2024-07-13 22:20:39.586580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.308 [2024-07-13 22:20:39.586602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.308 [2024-07-13 22:20:39.586621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.308 [2024-07-13 22:20:39.586666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.308 qpair failed and we were unable to recover it. 00:37:20.308 [2024-07-13 22:20:39.596445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.308 [2024-07-13 22:20:39.596629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.308 [2024-07-13 22:20:39.596662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.308 [2024-07-13 22:20:39.596684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.308 [2024-07-13 22:20:39.596701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.308 [2024-07-13 22:20:39.596741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.308 qpair failed and we were unable to recover it. 
00:37:20.308 [2024-07-13 22:20:39.606448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.308 [2024-07-13 22:20:39.606622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.308 [2024-07-13 22:20:39.606655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.308 [2024-07-13 22:20:39.606677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.308 [2024-07-13 22:20:39.606696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.308 [2024-07-13 22:20:39.606735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.308 qpair failed and we were unable to recover it. 00:37:20.308 [2024-07-13 22:20:39.616465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.308 [2024-07-13 22:20:39.616665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.308 [2024-07-13 22:20:39.616698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.308 [2024-07-13 22:20:39.616720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.308 [2024-07-13 22:20:39.616739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.308 [2024-07-13 22:20:39.616778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.308 qpair failed and we were unable to recover it. 00:37:20.308 [2024-07-13 22:20:39.626597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.308 [2024-07-13 22:20:39.626777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.308 [2024-07-13 22:20:39.626810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.309 [2024-07-13 22:20:39.626832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.309 [2024-07-13 22:20:39.626851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.309 [2024-07-13 22:20:39.626901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.309 qpair failed and we were unable to recover it. 
00:37:20.309 [2024-07-13 22:20:39.636605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.309 [2024-07-13 22:20:39.636789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.309 [2024-07-13 22:20:39.636827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.309 [2024-07-13 22:20:39.636849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.309 [2024-07-13 22:20:39.636873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.309 [2024-07-13 22:20:39.636916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.309 qpair failed and we were unable to recover it. 00:37:20.309 [2024-07-13 22:20:39.646622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.309 [2024-07-13 22:20:39.646809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.309 [2024-07-13 22:20:39.646841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.309 [2024-07-13 22:20:39.646888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.309 [2024-07-13 22:20:39.646908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.309 [2024-07-13 22:20:39.646948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.309 qpair failed and we were unable to recover it. 00:37:20.309 [2024-07-13 22:20:39.656654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.309 [2024-07-13 22:20:39.656833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.309 [2024-07-13 22:20:39.656874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.309 [2024-07-13 22:20:39.656902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.309 [2024-07-13 22:20:39.656922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.309 [2024-07-13 22:20:39.656962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.309 qpair failed and we were unable to recover it. 
00:37:20.309 [2024-07-13 22:20:39.666619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.309 [2024-07-13 22:20:39.666836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.309 [2024-07-13 22:20:39.666875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.309 [2024-07-13 22:20:39.666900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.309 [2024-07-13 22:20:39.666918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.309 [2024-07-13 22:20:39.666958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.309 qpair failed and we were unable to recover it. 00:37:20.309 [2024-07-13 22:20:39.676668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.309 [2024-07-13 22:20:39.676851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.309 [2024-07-13 22:20:39.676893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.309 [2024-07-13 22:20:39.676916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.309 [2024-07-13 22:20:39.676940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.309 [2024-07-13 22:20:39.676979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.309 qpair failed and we were unable to recover it. 00:37:20.309 [2024-07-13 22:20:39.686678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.309 [2024-07-13 22:20:39.686875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.309 [2024-07-13 22:20:39.686908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.309 [2024-07-13 22:20:39.686930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.309 [2024-07-13 22:20:39.686949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.309 [2024-07-13 22:20:39.686989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.309 qpair failed and we were unable to recover it. 
00:37:20.309 [2024-07-13 22:20:39.696711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.309 [2024-07-13 22:20:39.696900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.309 [2024-07-13 22:20:39.696934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.309 [2024-07-13 22:20:39.696956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.309 [2024-07-13 22:20:39.696974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.309 [2024-07-13 22:20:39.697014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.309 qpair failed and we were unable to recover it. 00:37:20.571 [2024-07-13 22:20:39.706770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.571 [2024-07-13 22:20:39.706966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.571 [2024-07-13 22:20:39.707001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.571 [2024-07-13 22:20:39.707024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.571 [2024-07-13 22:20:39.707043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.571 [2024-07-13 22:20:39.707083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.571 qpair failed and we were unable to recover it. 00:37:20.571 [2024-07-13 22:20:39.716858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.571 [2024-07-13 22:20:39.717110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.571 [2024-07-13 22:20:39.717143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.571 [2024-07-13 22:20:39.717179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.571 [2024-07-13 22:20:39.717196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.571 [2024-07-13 22:20:39.717249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.571 qpair failed and we were unable to recover it. 
00:37:20.571 [2024-07-13 22:20:39.726859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.571 [2024-07-13 22:20:39.727046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.571 [2024-07-13 22:20:39.727085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.571 [2024-07-13 22:20:39.727109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.571 [2024-07-13 22:20:39.727127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.571 [2024-07-13 22:20:39.727182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.571 qpair failed and we were unable to recover it. 00:37:20.571 [2024-07-13 22:20:39.736833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.571 [2024-07-13 22:20:39.737022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.571 [2024-07-13 22:20:39.737056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.571 [2024-07-13 22:20:39.737078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.571 [2024-07-13 22:20:39.737096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.571 [2024-07-13 22:20:39.737135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.571 qpair failed and we were unable to recover it. 00:37:20.571 [2024-07-13 22:20:39.746824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.571 [2024-07-13 22:20:39.747005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.572 [2024-07-13 22:20:39.747038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.572 [2024-07-13 22:20:39.747060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.572 [2024-07-13 22:20:39.747079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.572 [2024-07-13 22:20:39.747118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.572 qpair failed and we were unable to recover it. 
00:37:20.572 [2024-07-13 22:20:39.756915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.572 [2024-07-13 22:20:39.757109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.572 [2024-07-13 22:20:39.757143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.572 [2024-07-13 22:20:39.757165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.572 [2024-07-13 22:20:39.757183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.572 [2024-07-13 22:20:39.757222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.572 qpair failed and we were unable to recover it. 00:37:20.572 [2024-07-13 22:20:39.766925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.572 [2024-07-13 22:20:39.767108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.572 [2024-07-13 22:20:39.767142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.572 [2024-07-13 22:20:39.767170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.572 [2024-07-13 22:20:39.767190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.572 [2024-07-13 22:20:39.767230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.572 qpair failed and we were unable to recover it. 00:37:20.572 [2024-07-13 22:20:39.776941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.572 [2024-07-13 22:20:39.777157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.572 [2024-07-13 22:20:39.777191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.572 [2024-07-13 22:20:39.777213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.572 [2024-07-13 22:20:39.777231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.572 [2024-07-13 22:20:39.777270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.572 qpair failed and we were unable to recover it. 
00:37:20.572 [2024-07-13 22:20:39.786962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.572 [2024-07-13 22:20:39.787139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.572 [2024-07-13 22:20:39.787172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.572 [2024-07-13 22:20:39.787194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.572 [2024-07-13 22:20:39.787212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.572 [2024-07-13 22:20:39.787251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.572 qpair failed and we were unable to recover it. 00:37:20.572 [2024-07-13 22:20:39.797026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.572 [2024-07-13 22:20:39.797204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.572 [2024-07-13 22:20:39.797237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.572 [2024-07-13 22:20:39.797259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.572 [2024-07-13 22:20:39.797280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.572 [2024-07-13 22:20:39.797320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.572 qpair failed and we were unable to recover it. 00:37:20.572 [2024-07-13 22:20:39.806974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.572 [2024-07-13 22:20:39.807143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.572 [2024-07-13 22:20:39.807176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.572 [2024-07-13 22:20:39.807199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.572 [2024-07-13 22:20:39.807217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.572 [2024-07-13 22:20:39.807256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.572 qpair failed and we were unable to recover it. 
00:37:20.572 [2024-07-13 22:20:39.817094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.572 [2024-07-13 22:20:39.817320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.572 [2024-07-13 22:20:39.817353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.572 [2024-07-13 22:20:39.817374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.572 [2024-07-13 22:20:39.817393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.572 [2024-07-13 22:20:39.817432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.572 qpair failed and we were unable to recover it. 00:37:20.572 [2024-07-13 22:20:39.827069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.572 [2024-07-13 22:20:39.827258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.572 [2024-07-13 22:20:39.827291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.572 [2024-07-13 22:20:39.827312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.572 [2024-07-13 22:20:39.827331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.572 [2024-07-13 22:20:39.827379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.572 qpair failed and we were unable to recover it. 00:37:20.572 [2024-07-13 22:20:39.837167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.572 [2024-07-13 22:20:39.837370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.572 [2024-07-13 22:20:39.837404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.572 [2024-07-13 22:20:39.837426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.572 [2024-07-13 22:20:39.837444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.572 [2024-07-13 22:20:39.837483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.572 qpair failed and we were unable to recover it. 
00:37:20.572 [2024-07-13 22:20:39.847133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.572 [2024-07-13 22:20:39.847306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.572 [2024-07-13 22:20:39.847339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.572 [2024-07-13 22:20:39.847361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.572 [2024-07-13 22:20:39.847380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.572 [2024-07-13 22:20:39.847419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.572 qpair failed and we were unable to recover it. 00:37:20.572 [2024-07-13 22:20:39.857182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.572 [2024-07-13 22:20:39.857368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.572 [2024-07-13 22:20:39.857401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.572 [2024-07-13 22:20:39.857429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.572 [2024-07-13 22:20:39.857448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.572 [2024-07-13 22:20:39.857488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.572 qpair failed and we were unable to recover it. 00:37:20.572 [2024-07-13 22:20:39.867207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.572 [2024-07-13 22:20:39.867376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.572 [2024-07-13 22:20:39.867410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.572 [2024-07-13 22:20:39.867447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.572 [2024-07-13 22:20:39.867466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.572 [2024-07-13 22:20:39.867520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.572 qpair failed and we were unable to recover it. 
00:37:20.572 [2024-07-13 22:20:39.877235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.572 [2024-07-13 22:20:39.877445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.572 [2024-07-13 22:20:39.877478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.572 [2024-07-13 22:20:39.877501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.572 [2024-07-13 22:20:39.877518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.572 [2024-07-13 22:20:39.877558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.572 qpair failed and we were unable to recover it. 00:37:20.572 [2024-07-13 22:20:39.887188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.572 [2024-07-13 22:20:39.887398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.572 [2024-07-13 22:20:39.887431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.573 [2024-07-13 22:20:39.887453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.573 [2024-07-13 22:20:39.887472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.573 [2024-07-13 22:20:39.887510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.573 qpair failed and we were unable to recover it. 00:37:20.573 [2024-07-13 22:20:39.897343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.573 [2024-07-13 22:20:39.897524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.573 [2024-07-13 22:20:39.897558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.573 [2024-07-13 22:20:39.897595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.573 [2024-07-13 22:20:39.897613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.573 [2024-07-13 22:20:39.897667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.573 qpair failed and we were unable to recover it. 
00:37:20.573 [2024-07-13 22:20:39.907282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.573 [2024-07-13 22:20:39.907462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.573 [2024-07-13 22:20:39.907496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.573 [2024-07-13 22:20:39.907519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.573 [2024-07-13 22:20:39.907537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.573 [2024-07-13 22:20:39.907577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.573 qpair failed and we were unable to recover it. 00:37:20.573 [2024-07-13 22:20:39.917357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.573 [2024-07-13 22:20:39.917588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.573 [2024-07-13 22:20:39.917621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.573 [2024-07-13 22:20:39.917643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.573 [2024-07-13 22:20:39.917661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.573 [2024-07-13 22:20:39.917701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.573 qpair failed and we were unable to recover it. 00:37:20.573 [2024-07-13 22:20:39.927426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.573 [2024-07-13 22:20:39.927603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.573 [2024-07-13 22:20:39.927653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.573 [2024-07-13 22:20:39.927675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.573 [2024-07-13 22:20:39.927692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.573 [2024-07-13 22:20:39.927745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.573 qpair failed and we were unable to recover it. 
00:37:20.573 [2024-07-13 22:20:39.937411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.573 [2024-07-13 22:20:39.937598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.573 [2024-07-13 22:20:39.937649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.573 [2024-07-13 22:20:39.937672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.573 [2024-07-13 22:20:39.937690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.573 [2024-07-13 22:20:39.937743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.573 qpair failed and we were unable to recover it. 00:37:20.573 [2024-07-13 22:20:39.947486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.573 [2024-07-13 22:20:39.947670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.573 [2024-07-13 22:20:39.947727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.573 [2024-07-13 22:20:39.947755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.573 [2024-07-13 22:20:39.947774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.573 [2024-07-13 22:20:39.947828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.573 qpair failed and we were unable to recover it. 00:37:20.573 [2024-07-13 22:20:39.957472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.573 [2024-07-13 22:20:39.957654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.573 [2024-07-13 22:20:39.957687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.573 [2024-07-13 22:20:39.957709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.573 [2024-07-13 22:20:39.957742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.573 [2024-07-13 22:20:39.957783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.573 qpair failed and we were unable to recover it. 
00:37:20.835 [2024-07-13 22:20:39.967473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.835 [2024-07-13 22:20:39.967648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.835 [2024-07-13 22:20:39.967682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.835 [2024-07-13 22:20:39.967720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.835 [2024-07-13 22:20:39.967739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.835 [2024-07-13 22:20:39.967792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.835 qpair failed and we were unable to recover it. 00:37:20.835 [2024-07-13 22:20:39.977560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.835 [2024-07-13 22:20:39.977767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.835 [2024-07-13 22:20:39.977799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.835 [2024-07-13 22:20:39.977822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.835 [2024-07-13 22:20:39.977840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.835 [2024-07-13 22:20:39.977904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.835 qpair failed and we were unable to recover it. 00:37:20.835 [2024-07-13 22:20:39.987500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.835 [2024-07-13 22:20:39.987684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.835 [2024-07-13 22:20:39.987717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.835 [2024-07-13 22:20:39.987739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.835 [2024-07-13 22:20:39.987757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.835 [2024-07-13 22:20:39.987802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.835 qpair failed and we were unable to recover it. 
00:37:20.835 [2024-07-13 22:20:39.997658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.835 [2024-07-13 22:20:39.997860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.835 [2024-07-13 22:20:39.997901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.835 [2024-07-13 22:20:39.997939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.835 [2024-07-13 22:20:39.997961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.835 [2024-07-13 22:20:39.998000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.835 qpair failed and we were unable to recover it. 00:37:20.835 [2024-07-13 22:20:40.007629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.835 [2024-07-13 22:20:40.007803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.835 [2024-07-13 22:20:40.007839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.835 [2024-07-13 22:20:40.007863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.835 [2024-07-13 22:20:40.007896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.835 [2024-07-13 22:20:40.007940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.835 qpair failed and we were unable to recover it. 00:37:20.835 [2024-07-13 22:20:40.017674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.836 [2024-07-13 22:20:40.017854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.836 [2024-07-13 22:20:40.017899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.836 [2024-07-13 22:20:40.017923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.836 [2024-07-13 22:20:40.017942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.836 [2024-07-13 22:20:40.017987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.836 qpair failed and we were unable to recover it. 
00:37:20.836 [2024-07-13 22:20:40.028102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.836 [2024-07-13 22:20:40.028303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.836 [2024-07-13 22:20:40.028344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.836 [2024-07-13 22:20:40.028367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.836 [2024-07-13 22:20:40.028386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.836 [2024-07-13 22:20:40.028428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.836 qpair failed and we were unable to recover it. 00:37:20.836 [2024-07-13 22:20:40.037731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.836 [2024-07-13 22:20:40.037927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.836 [2024-07-13 22:20:40.037969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.836 [2024-07-13 22:20:40.037993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.836 [2024-07-13 22:20:40.038014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.836 [2024-07-13 22:20:40.038056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.836 qpair failed and we were unable to recover it. 00:37:20.836 [2024-07-13 22:20:40.047728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.836 [2024-07-13 22:20:40.047907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.836 [2024-07-13 22:20:40.047942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.836 [2024-07-13 22:20:40.047965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.836 [2024-07-13 22:20:40.047984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.836 [2024-07-13 22:20:40.048024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.836 qpair failed and we were unable to recover it. 
00:37:20.836 [2024-07-13 22:20:40.057980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.836 [2024-07-13 22:20:40.058193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.836 [2024-07-13 22:20:40.058233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.836 [2024-07-13 22:20:40.058256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.836 [2024-07-13 22:20:40.058276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.836 [2024-07-13 22:20:40.058326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.836 qpair failed and we were unable to recover it. 00:37:20.836 [2024-07-13 22:20:40.067896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.836 [2024-07-13 22:20:40.068138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.836 [2024-07-13 22:20:40.068175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.836 [2024-07-13 22:20:40.068200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.836 [2024-07-13 22:20:40.068219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.836 [2024-07-13 22:20:40.068264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.836 qpair failed and we were unable to recover it. 00:37:20.836 [2024-07-13 22:20:40.077871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.836 [2024-07-13 22:20:40.078086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.836 [2024-07-13 22:20:40.078120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.836 [2024-07-13 22:20:40.078142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.836 [2024-07-13 22:20:40.078169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.836 [2024-07-13 22:20:40.078212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.836 qpair failed and we were unable to recover it. 
00:37:20.836 [2024-07-13 22:20:40.087962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.836 [2024-07-13 22:20:40.088142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.836 [2024-07-13 22:20:40.088176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.836 [2024-07-13 22:20:40.088198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.836 [2024-07-13 22:20:40.088218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.836 [2024-07-13 22:20:40.088258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.836 qpair failed and we were unable to recover it. 00:37:20.836 [2024-07-13 22:20:40.097833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.836 [2024-07-13 22:20:40.098020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.836 [2024-07-13 22:20:40.098053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.836 [2024-07-13 22:20:40.098075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.836 [2024-07-13 22:20:40.098094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.836 [2024-07-13 22:20:40.098134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.836 qpair failed and we were unable to recover it. 00:37:20.836 [2024-07-13 22:20:40.107928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.836 [2024-07-13 22:20:40.108126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.836 [2024-07-13 22:20:40.108160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.836 [2024-07-13 22:20:40.108182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.836 [2024-07-13 22:20:40.108200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.836 [2024-07-13 22:20:40.108240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.836 qpair failed and we were unable to recover it. 
00:37:20.836 [2024-07-13 22:20:40.117939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.836 [2024-07-13 22:20:40.118163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.836 [2024-07-13 22:20:40.118196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.836 [2024-07-13 22:20:40.118219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.836 [2024-07-13 22:20:40.118237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.836 [2024-07-13 22:20:40.118278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.836 qpair failed and we were unable to recover it. 00:37:20.836 [2024-07-13 22:20:40.127907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.836 [2024-07-13 22:20:40.128128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.836 [2024-07-13 22:20:40.128161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.836 [2024-07-13 22:20:40.128184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.836 [2024-07-13 22:20:40.128202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.836 [2024-07-13 22:20:40.128242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.836 qpair failed and we were unable to recover it. 00:37:20.836 [2024-07-13 22:20:40.137988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.836 [2024-07-13 22:20:40.138164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.836 [2024-07-13 22:20:40.138197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.836 [2024-07-13 22:20:40.138219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.836 [2024-07-13 22:20:40.138238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.836 [2024-07-13 22:20:40.138278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.836 qpair failed and we were unable to recover it. 
00:37:20.836 [2024-07-13 22:20:40.147973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.836 [2024-07-13 22:20:40.148200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.836 [2024-07-13 22:20:40.148233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.836 [2024-07-13 22:20:40.148256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.836 [2024-07-13 22:20:40.148274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.836 [2024-07-13 22:20:40.148313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.836 qpair failed and we were unable to recover it. 00:37:20.836 [2024-07-13 22:20:40.158084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.836 [2024-07-13 22:20:40.158287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.837 [2024-07-13 22:20:40.158320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.837 [2024-07-13 22:20:40.158342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.837 [2024-07-13 22:20:40.158360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.837 [2024-07-13 22:20:40.158401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.837 qpair failed and we were unable to recover it. 00:37:20.837 [2024-07-13 22:20:40.168059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.837 [2024-07-13 22:20:40.168238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.837 [2024-07-13 22:20:40.168272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.837 [2024-07-13 22:20:40.168300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.837 [2024-07-13 22:20:40.168320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.837 [2024-07-13 22:20:40.168360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.837 qpair failed and we were unable to recover it. 
00:37:20.837 [2024-07-13 22:20:40.178117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.837 [2024-07-13 22:20:40.178304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.837 [2024-07-13 22:20:40.178336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.837 [2024-07-13 22:20:40.178358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.837 [2024-07-13 22:20:40.178376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.837 [2024-07-13 22:20:40.178415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.837 qpair failed and we were unable to recover it. 00:37:20.837 [2024-07-13 22:20:40.188092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.837 [2024-07-13 22:20:40.188270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.837 [2024-07-13 22:20:40.188304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.837 [2024-07-13 22:20:40.188326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.837 [2024-07-13 22:20:40.188345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.837 [2024-07-13 22:20:40.188384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.837 qpair failed and we were unable to recover it. 00:37:20.837 [2024-07-13 22:20:40.198177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.837 [2024-07-13 22:20:40.198385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.837 [2024-07-13 22:20:40.198418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.837 [2024-07-13 22:20:40.198440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.837 [2024-07-13 22:20:40.198457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.837 [2024-07-13 22:20:40.198498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.837 qpair failed and we were unable to recover it. 
00:37:20.837 [2024-07-13 22:20:40.208133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.837 [2024-07-13 22:20:40.208305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.837 [2024-07-13 22:20:40.208354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.837 [2024-07-13 22:20:40.208376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.837 [2024-07-13 22:20:40.208394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.837 [2024-07-13 22:20:40.208448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.837 qpair failed and we were unable to recover it. 00:37:20.837 [2024-07-13 22:20:40.218245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:20.837 [2024-07-13 22:20:40.218476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:20.837 [2024-07-13 22:20:40.218521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:20.837 [2024-07-13 22:20:40.218544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:20.837 [2024-07-13 22:20:40.218564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:20.837 [2024-07-13 22:20:40.218629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:20.837 qpair failed and we were unable to recover it. 00:37:21.097 [2024-07-13 22:20:40.228253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.097 [2024-07-13 22:20:40.228437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.097 [2024-07-13 22:20:40.228472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.097 [2024-07-13 22:20:40.228494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.097 [2024-07-13 22:20:40.228513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.097 [2024-07-13 22:20:40.228552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.097 qpair failed and we were unable to recover it. 
00:37:21.097 [2024-07-13 22:20:40.238285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.097 [2024-07-13 22:20:40.238519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.097 [2024-07-13 22:20:40.238554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.097 [2024-07-13 22:20:40.238576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.097 [2024-07-13 22:20:40.238593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.097 [2024-07-13 22:20:40.238648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.097 qpair failed and we were unable to recover it. 00:37:21.097 [2024-07-13 22:20:40.248288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.097 [2024-07-13 22:20:40.248465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.097 [2024-07-13 22:20:40.248499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.097 [2024-07-13 22:20:40.248538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.097 [2024-07-13 22:20:40.248556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.097 [2024-07-13 22:20:40.248608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.097 qpair failed and we were unable to recover it. 00:37:21.097 [2024-07-13 22:20:40.258554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.097 [2024-07-13 22:20:40.258740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.097 [2024-07-13 22:20:40.258789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.097 [2024-07-13 22:20:40.258817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.097 [2024-07-13 22:20:40.258836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.097 [2024-07-13 22:20:40.258899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.097 qpair failed and we were unable to recover it. 
00:37:21.097 [2024-07-13 22:20:40.268383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.097 [2024-07-13 22:20:40.268564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.097 [2024-07-13 22:20:40.268597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.097 [2024-07-13 22:20:40.268620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.097 [2024-07-13 22:20:40.268638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.097 [2024-07-13 22:20:40.268677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.097 qpair failed and we were unable to recover it. 00:37:21.097 [2024-07-13 22:20:40.278381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.097 [2024-07-13 22:20:40.278570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.097 [2024-07-13 22:20:40.278603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.097 [2024-07-13 22:20:40.278626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.097 [2024-07-13 22:20:40.278643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.097 [2024-07-13 22:20:40.278683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.097 qpair failed and we were unable to recover it. 00:37:21.097 [2024-07-13 22:20:40.288362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.097 [2024-07-13 22:20:40.288542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.097 [2024-07-13 22:20:40.288575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.097 [2024-07-13 22:20:40.288597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.097 [2024-07-13 22:20:40.288616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.097 [2024-07-13 22:20:40.288655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.097 qpair failed and we were unable to recover it. 
00:37:21.097 [2024-07-13 22:20:40.298445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.097 [2024-07-13 22:20:40.298670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.097 [2024-07-13 22:20:40.298703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.097 [2024-07-13 22:20:40.298725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.097 [2024-07-13 22:20:40.298743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.097 [2024-07-13 22:20:40.298783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.097 qpair failed and we were unable to recover it. 00:37:21.097 [2024-07-13 22:20:40.308465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.097 [2024-07-13 22:20:40.308651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.097 [2024-07-13 22:20:40.308699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.097 [2024-07-13 22:20:40.308721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.097 [2024-07-13 22:20:40.308739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.097 [2024-07-13 22:20:40.308793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.097 qpair failed and we were unable to recover it. 00:37:21.097 [2024-07-13 22:20:40.318543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.097 [2024-07-13 22:20:40.318737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.097 [2024-07-13 22:20:40.318784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.097 [2024-07-13 22:20:40.318806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.097 [2024-07-13 22:20:40.318822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.097 [2024-07-13 22:20:40.318885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.097 qpair failed and we were unable to recover it. 
00:37:21.098 [2024-07-13 22:20:40.328504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.098 [2024-07-13 22:20:40.328675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.098 [2024-07-13 22:20:40.328708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.098 [2024-07-13 22:20:40.328730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.098 [2024-07-13 22:20:40.328748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.098 [2024-07-13 22:20:40.328787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-07-13 22:20:40.338513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.098 [2024-07-13 22:20:40.338693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.098 [2024-07-13 22:20:40.338726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.098 [2024-07-13 22:20:40.338749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.098 [2024-07-13 22:20:40.338767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.098 [2024-07-13 22:20:40.338806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-07-13 22:20:40.348612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.098 [2024-07-13 22:20:40.348788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.098 [2024-07-13 22:20:40.348827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.098 [2024-07-13 22:20:40.348850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.098 [2024-07-13 22:20:40.348875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.098 [2024-07-13 22:20:40.348918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.098 qpair failed and we were unable to recover it. 
00:37:21.098 [2024-07-13 22:20:40.358641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.098 [2024-07-13 22:20:40.358873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.098 [2024-07-13 22:20:40.358906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.098 [2024-07-13 22:20:40.358928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.098 [2024-07-13 22:20:40.358945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.098 [2024-07-13 22:20:40.358986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-07-13 22:20:40.368779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.098 [2024-07-13 22:20:40.368993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.098 [2024-07-13 22:20:40.369026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.098 [2024-07-13 22:20:40.369049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.098 [2024-07-13 22:20:40.369068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.098 [2024-07-13 22:20:40.369107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-07-13 22:20:40.378668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.098 [2024-07-13 22:20:40.378841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.098 [2024-07-13 22:20:40.378881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.098 [2024-07-13 22:20:40.378906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.098 [2024-07-13 22:20:40.378925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.098 [2024-07-13 22:20:40.378965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.098 qpair failed and we were unable to recover it. 
00:37:21.098 [2024-07-13 22:20:40.388820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.098 [2024-07-13 22:20:40.389057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.098 [2024-07-13 22:20:40.389090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.098 [2024-07-13 22:20:40.389112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.098 [2024-07-13 22:20:40.389130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.098 [2024-07-13 22:20:40.389177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-07-13 22:20:40.398886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.098 [2024-07-13 22:20:40.399112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.098 [2024-07-13 22:20:40.399146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.098 [2024-07-13 22:20:40.399168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.098 [2024-07-13 22:20:40.399186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.098 [2024-07-13 22:20:40.399225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-07-13 22:20:40.408744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.098 [2024-07-13 22:20:40.408929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.098 [2024-07-13 22:20:40.408962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.098 [2024-07-13 22:20:40.408984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.098 [2024-07-13 22:20:40.409002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.098 [2024-07-13 22:20:40.409041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.098 qpair failed and we were unable to recover it. 
00:37:21.098 [2024-07-13 22:20:40.418775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.098 [2024-07-13 22:20:40.418987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.098 [2024-07-13 22:20:40.419026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.098 [2024-07-13 22:20:40.419048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.098 [2024-07-13 22:20:40.419067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.098 [2024-07-13 22:20:40.419106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-07-13 22:20:40.428800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.098 [2024-07-13 22:20:40.428986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.098 [2024-07-13 22:20:40.429019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.098 [2024-07-13 22:20:40.429041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.098 [2024-07-13 22:20:40.429060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.098 [2024-07-13 22:20:40.429099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-07-13 22:20:40.438876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.098 [2024-07-13 22:20:40.439125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.098 [2024-07-13 22:20:40.439163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.098 [2024-07-13 22:20:40.439187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.098 [2024-07-13 22:20:40.439223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.098 [2024-07-13 22:20:40.439262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.098 qpair failed and we were unable to recover it. 
00:37:21.098 [2024-07-13 22:20:40.448833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.098 [2024-07-13 22:20:40.449022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.098 [2024-07-13 22:20:40.449056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.098 [2024-07-13 22:20:40.449078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.098 [2024-07-13 22:20:40.449097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.098 [2024-07-13 22:20:40.449136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-07-13 22:20:40.458996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.098 [2024-07-13 22:20:40.459206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.098 [2024-07-13 22:20:40.459240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.098 [2024-07-13 22:20:40.459262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.098 [2024-07-13 22:20:40.459280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.098 [2024-07-13 22:20:40.459336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.098 qpair failed and we were unable to recover it. 00:37:21.098 [2024-07-13 22:20:40.468884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.098 [2024-07-13 22:20:40.469086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.098 [2024-07-13 22:20:40.469120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.099 [2024-07-13 22:20:40.469143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.099 [2024-07-13 22:20:40.469161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.099 [2024-07-13 22:20:40.469215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.099 qpair failed and we were unable to recover it. 
00:37:21.099 [2024-07-13 22:20:40.478987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.099 [2024-07-13 22:20:40.479238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.099 [2024-07-13 22:20:40.479271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.099 [2024-07-13 22:20:40.479291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.099 [2024-07-13 22:20:40.479314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.099 [2024-07-13 22:20:40.479368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.099 qpair failed and we were unable to recover it. 00:37:21.099 [2024-07-13 22:20:40.489037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.099 [2024-07-13 22:20:40.489226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.099 [2024-07-13 22:20:40.489279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.099 [2024-07-13 22:20:40.489317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.099 [2024-07-13 22:20:40.489338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.099 [2024-07-13 22:20:40.489393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.099 qpair failed and we were unable to recover it. 00:37:21.358 [2024-07-13 22:20:40.498997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.358 [2024-07-13 22:20:40.499180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.358 [2024-07-13 22:20:40.499214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.358 [2024-07-13 22:20:40.499236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.358 [2024-07-13 22:20:40.499255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.358 [2024-07-13 22:20:40.499295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.358 qpair failed and we were unable to recover it. 
00:37:21.358 [2024-07-13 22:20:40.509019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.358 [2024-07-13 22:20:40.509200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.358 [2024-07-13 22:20:40.509233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.358 [2024-07-13 22:20:40.509256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.358 [2024-07-13 22:20:40.509275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.358 [2024-07-13 22:20:40.509314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.358 qpair failed and we were unable to recover it.
00:37:21.358 [2024-07-13 22:20:40.519058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.358 [2024-07-13 22:20:40.519262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.359 [2024-07-13 22:20:40.519296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.359 [2024-07-13 22:20:40.519317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.359 [2024-07-13 22:20:40.519335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.359 [2024-07-13 22:20:40.519374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.359 qpair failed and we were unable to recover it.
00:37:21.359 [2024-07-13 22:20:40.529028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.359 [2024-07-13 22:20:40.529218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.359 [2024-07-13 22:20:40.529251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.359 [2024-07-13 22:20:40.529273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.359 [2024-07-13 22:20:40.529292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.359 [2024-07-13 22:20:40.529331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.359 qpair failed and we were unable to recover it.
00:37:21.359 [2024-07-13 22:20:40.539135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.359 [2024-07-13 22:20:40.539316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.359 [2024-07-13 22:20:40.539349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.359 [2024-07-13 22:20:40.539370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.359 [2024-07-13 22:20:40.539388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.359 [2024-07-13 22:20:40.539427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.359 qpair failed and we were unable to recover it.
00:37:21.359 [2024-07-13 22:20:40.549091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.359 [2024-07-13 22:20:40.549271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.359 [2024-07-13 22:20:40.549305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.359 [2024-07-13 22:20:40.549327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.359 [2024-07-13 22:20:40.549346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.359 [2024-07-13 22:20:40.549385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.359 qpair failed and we were unable to recover it.
00:37:21.359 [2024-07-13 22:20:40.559228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.359 [2024-07-13 22:20:40.559444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.359 [2024-07-13 22:20:40.559479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.359 [2024-07-13 22:20:40.559501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.359 [2024-07-13 22:20:40.559523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.359 [2024-07-13 22:20:40.559578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.359 qpair failed and we were unable to recover it.
00:37:21.359 [2024-07-13 22:20:40.569243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.359 [2024-07-13 22:20:40.569416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.359 [2024-07-13 22:20:40.569450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.359 [2024-07-13 22:20:40.569487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.359 [2024-07-13 22:20:40.569510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.359 [2024-07-13 22:20:40.569565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.359 qpair failed and we were unable to recover it.
00:37:21.359 [2024-07-13 22:20:40.579277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.359 [2024-07-13 22:20:40.579460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.359 [2024-07-13 22:20:40.579493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.359 [2024-07-13 22:20:40.579530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.359 [2024-07-13 22:20:40.579549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.359 [2024-07-13 22:20:40.579602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.359 qpair failed and we were unable to recover it.
00:37:21.359 [2024-07-13 22:20:40.589292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.359 [2024-07-13 22:20:40.589466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.359 [2024-07-13 22:20:40.589500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.359 [2024-07-13 22:20:40.589537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.359 [2024-07-13 22:20:40.589555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.359 [2024-07-13 22:20:40.589608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.359 qpair failed and we were unable to recover it.
00:37:21.359 [2024-07-13 22:20:40.599349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.359 [2024-07-13 22:20:40.599539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.359 [2024-07-13 22:20:40.599588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.359 [2024-07-13 22:20:40.599610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.359 [2024-07-13 22:20:40.599627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.359 [2024-07-13 22:20:40.599680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.359 qpair failed and we were unable to recover it.
00:37:21.359 [2024-07-13 22:20:40.609301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.359 [2024-07-13 22:20:40.609469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.359 [2024-07-13 22:20:40.609502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.359 [2024-07-13 22:20:40.609525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.359 [2024-07-13 22:20:40.609559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.359 [2024-07-13 22:20:40.609619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.359 qpair failed and we were unable to recover it.
00:37:21.359 [2024-07-13 22:20:40.619389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.359 [2024-07-13 22:20:40.619568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.359 [2024-07-13 22:20:40.619601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.359 [2024-07-13 22:20:40.619623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.359 [2024-07-13 22:20:40.619642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.359 [2024-07-13 22:20:40.619681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.359 qpair failed and we were unable to recover it.
00:37:21.359 [2024-07-13 22:20:40.629437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.359 [2024-07-13 22:20:40.629625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.359 [2024-07-13 22:20:40.629673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.359 [2024-07-13 22:20:40.629695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.359 [2024-07-13 22:20:40.629712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.359 [2024-07-13 22:20:40.629764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.359 qpair failed and we were unable to recover it.
00:37:21.359 [2024-07-13 22:20:40.639464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.359 [2024-07-13 22:20:40.639659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.359 [2024-07-13 22:20:40.639692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.359 [2024-07-13 22:20:40.639714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.359 [2024-07-13 22:20:40.639732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.359 [2024-07-13 22:20:40.639771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.359 qpair failed and we were unable to recover it.
00:37:21.359 [2024-07-13 22:20:40.649475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.359 [2024-07-13 22:20:40.649658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.359 [2024-07-13 22:20:40.649691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.359 [2024-07-13 22:20:40.649714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.359 [2024-07-13 22:20:40.649731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.359 [2024-07-13 22:20:40.649770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.359 qpair failed and we were unable to recover it.
00:37:21.359 [2024-07-13 22:20:40.659458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.359 [2024-07-13 22:20:40.659641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.359 [2024-07-13 22:20:40.659674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.359 [2024-07-13 22:20:40.659705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.360 [2024-07-13 22:20:40.659725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.360 [2024-07-13 22:20:40.659764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.360 qpair failed and we were unable to recover it.
00:37:21.360 [2024-07-13 22:20:40.669541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.360 [2024-07-13 22:20:40.669717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.360 [2024-07-13 22:20:40.669751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.360 [2024-07-13 22:20:40.669773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.360 [2024-07-13 22:20:40.669791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.360 [2024-07-13 22:20:40.669830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.360 qpair failed and we were unable to recover it.
00:37:21.360 [2024-07-13 22:20:40.679553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.360 [2024-07-13 22:20:40.679745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.360 [2024-07-13 22:20:40.679778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.360 [2024-07-13 22:20:40.679800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.360 [2024-07-13 22:20:40.679818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.360 [2024-07-13 22:20:40.679857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.360 qpair failed and we were unable to recover it.
00:37:21.360 [2024-07-13 22:20:40.689541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.360 [2024-07-13 22:20:40.689714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.360 [2024-07-13 22:20:40.689746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.360 [2024-07-13 22:20:40.689768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.360 [2024-07-13 22:20:40.689786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.360 [2024-07-13 22:20:40.689825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.360 qpair failed and we were unable to recover it.
00:37:21.360 [2024-07-13 22:20:40.699618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.360 [2024-07-13 22:20:40.699791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.360 [2024-07-13 22:20:40.699825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.360 [2024-07-13 22:20:40.699846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.360 [2024-07-13 22:20:40.699875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.360 [2024-07-13 22:20:40.699920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.360 qpair failed and we were unable to recover it.
00:37:21.360 [2024-07-13 22:20:40.709673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.360 [2024-07-13 22:20:40.709903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.360 [2024-07-13 22:20:40.709936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.360 [2024-07-13 22:20:40.709958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.360 [2024-07-13 22:20:40.709977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.360 [2024-07-13 22:20:40.710016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.360 qpair failed and we were unable to recover it.
00:37:21.360 [2024-07-13 22:20:40.719671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.360 [2024-07-13 22:20:40.719900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.360 [2024-07-13 22:20:40.719931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.360 [2024-07-13 22:20:40.719952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.360 [2024-07-13 22:20:40.719969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.360 [2024-07-13 22:20:40.720008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.360 qpair failed and we were unable to recover it.
00:37:21.360 [2024-07-13 22:20:40.729715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.360 [2024-07-13 22:20:40.729902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.360 [2024-07-13 22:20:40.729936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.360 [2024-07-13 22:20:40.729973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.360 [2024-07-13 22:20:40.729993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.360 [2024-07-13 22:20:40.730033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.360 qpair failed and we were unable to recover it.
00:37:21.360 [2024-07-13 22:20:40.739685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.360 [2024-07-13 22:20:40.739860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.360 [2024-07-13 22:20:40.739901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.360 [2024-07-13 22:20:40.739923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.360 [2024-07-13 22:20:40.739942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.360 [2024-07-13 22:20:40.739982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.360 qpair failed and we were unable to recover it.
00:37:21.360 [2024-07-13 22:20:40.749745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.360 [2024-07-13 22:20:40.749964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.360 [2024-07-13 22:20:40.750003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.360 [2024-07-13 22:20:40.750027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.360 [2024-07-13 22:20:40.750046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.360 [2024-07-13 22:20:40.750086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.360 qpair failed and we were unable to recover it.
00:37:21.620 [2024-07-13 22:20:40.759826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.620 [2024-07-13 22:20:40.760049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.620 [2024-07-13 22:20:40.760083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.620 [2024-07-13 22:20:40.760105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.620 [2024-07-13 22:20:40.760123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.620 [2024-07-13 22:20:40.760163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.620 qpair failed and we were unable to recover it.
00:37:21.620 [2024-07-13 22:20:40.769759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.620 [2024-07-13 22:20:40.769954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.620 [2024-07-13 22:20:40.769988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.620 [2024-07-13 22:20:40.770010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.620 [2024-07-13 22:20:40.770028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.620 [2024-07-13 22:20:40.770067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.620 qpair failed and we were unable to recover it.
00:37:21.620 [2024-07-13 22:20:40.779835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.620 [2024-07-13 22:20:40.780067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.620 [2024-07-13 22:20:40.780101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.620 [2024-07-13 22:20:40.780123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.620 [2024-07-13 22:20:40.780141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.620 [2024-07-13 22:20:40.780180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.620 qpair failed and we were unable to recover it.
00:37:21.620 [2024-07-13 22:20:40.789833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.620 [2024-07-13 22:20:40.790018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.620 [2024-07-13 22:20:40.790052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.620 [2024-07-13 22:20:40.790074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.620 [2024-07-13 22:20:40.790092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.620 [2024-07-13 22:20:40.790138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.620 qpair failed and we were unable to recover it.
00:37:21.620 [2024-07-13 22:20:40.799942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.621 [2024-07-13 22:20:40.800136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.621 [2024-07-13 22:20:40.800169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.621 [2024-07-13 22:20:40.800191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.621 [2024-07-13 22:20:40.800209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.621 [2024-07-13 22:20:40.800248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.621 qpair failed and we were unable to recover it.
00:37:21.621 [2024-07-13 22:20:40.809899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.621 [2024-07-13 22:20:40.810076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.621 [2024-07-13 22:20:40.810115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.621 [2024-07-13 22:20:40.810137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.621 [2024-07-13 22:20:40.810156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.621 [2024-07-13 22:20:40.810195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.621 qpair failed and we were unable to recover it.
00:37:21.621 [2024-07-13 22:20:40.819984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.621 [2024-07-13 22:20:40.820163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.621 [2024-07-13 22:20:40.820197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.621 [2024-07-13 22:20:40.820219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.621 [2024-07-13 22:20:40.820237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.621 [2024-07-13 22:20:40.820275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.621 qpair failed and we were unable to recover it.
00:37:21.621 [2024-07-13 22:20:40.829991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.621 [2024-07-13 22:20:40.830163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.621 [2024-07-13 22:20:40.830196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.621 [2024-07-13 22:20:40.830218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.621 [2024-07-13 22:20:40.830237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.621 [2024-07-13 22:20:40.830275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.621 qpair failed and we were unable to recover it.
00:37:21.621 [2024-07-13 22:20:40.840028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.621 [2024-07-13 22:20:40.840217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.621 [2024-07-13 22:20:40.840255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.621 [2024-07-13 22:20:40.840278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.621 [2024-07-13 22:20:40.840295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.621 [2024-07-13 22:20:40.840335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.621 qpair failed and we were unable to recover it.
00:37:21.621 [2024-07-13 22:20:40.850025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.621 [2024-07-13 22:20:40.850223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.621 [2024-07-13 22:20:40.850258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.621 [2024-07-13 22:20:40.850281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.621 [2024-07-13 22:20:40.850300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.621 [2024-07-13 22:20:40.850340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.621 qpair failed and we were unable to recover it.
00:37:21.621 [2024-07-13 22:20:40.860078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.621 [2024-07-13 22:20:40.860307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.621 [2024-07-13 22:20:40.860340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.621 [2024-07-13 22:20:40.860362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.621 [2024-07-13 22:20:40.860381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.621 [2024-07-13 22:20:40.860421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.621 qpair failed and we were unable to recover it.
00:37:21.621 [2024-07-13 22:20:40.870165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.621 [2024-07-13 22:20:40.870395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.621 [2024-07-13 22:20:40.870429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.621 [2024-07-13 22:20:40.870451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.621 [2024-07-13 22:20:40.870469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.621 [2024-07-13 22:20:40.870508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.621 qpair failed and we were unable to recover it.
00:37:21.621 [2024-07-13 22:20:40.880133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.621 [2024-07-13 22:20:40.880319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.621 [2024-07-13 22:20:40.880352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.621 [2024-07-13 22:20:40.880374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.621 [2024-07-13 22:20:40.880397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.621 [2024-07-13 22:20:40.880441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.621 qpair failed and we were unable to recover it.
00:37:21.621 [2024-07-13 22:20:40.890143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.621 [2024-07-13 22:20:40.890331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.621 [2024-07-13 22:20:40.890364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.621 [2024-07-13 22:20:40.890386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.621 [2024-07-13 22:20:40.890404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.621 [2024-07-13 22:20:40.890443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.621 qpair failed and we were unable to recover it.
00:37:21.621 [2024-07-13 22:20:40.900209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.621 [2024-07-13 22:20:40.900396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.621 [2024-07-13 22:20:40.900428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.621 [2024-07-13 22:20:40.900450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.621 [2024-07-13 22:20:40.900468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.621 [2024-07-13 22:20:40.900507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.621 qpair failed and we were unable to recover it.
00:37:21.621 [2024-07-13 22:20:40.910220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.621 [2024-07-13 22:20:40.910395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.621 [2024-07-13 22:20:40.910428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.621 [2024-07-13 22:20:40.910450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.621 [2024-07-13 22:20:40.910468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.621 [2024-07-13 22:20:40.910507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.621 qpair failed and we were unable to recover it.
00:37:21.621 [2024-07-13 22:20:40.920269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.621 [2024-07-13 22:20:40.920448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.621 [2024-07-13 22:20:40.920480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.621 [2024-07-13 22:20:40.920502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.621 [2024-07-13 22:20:40.920523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.621 [2024-07-13 22:20:40.920562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.621 qpair failed and we were unable to recover it.
00:37:21.621 [2024-07-13 22:20:40.930243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.621 [2024-07-13 22:20:40.930431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.621 [2024-07-13 22:20:40.930465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.621 [2024-07-13 22:20:40.930488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.621 [2024-07-13 22:20:40.930506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.621 [2024-07-13 22:20:40.930545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.621 qpair failed and we were unable to recover it.
00:37:21.621 [2024-07-13 22:20:40.940332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.622 [2024-07-13 22:20:40.940525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.622 [2024-07-13 22:20:40.940562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.622 [2024-07-13 22:20:40.940587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.622 [2024-07-13 22:20:40.940606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.622 [2024-07-13 22:20:40.940646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.622 qpair failed and we were unable to recover it.
00:37:21.622 [2024-07-13 22:20:40.950320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.622 [2024-07-13 22:20:40.950491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.622 [2024-07-13 22:20:40.950526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.622 [2024-07-13 22:20:40.950548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.622 [2024-07-13 22:20:40.950567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.622 [2024-07-13 22:20:40.950607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.622 qpair failed and we were unable to recover it.
00:37:21.622 [2024-07-13 22:20:40.960451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.622 [2024-07-13 22:20:40.960650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.622 [2024-07-13 22:20:40.960683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.622 [2024-07-13 22:20:40.960706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.622 [2024-07-13 22:20:40.960723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.622 [2024-07-13 22:20:40.960763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.622 qpair failed and we were unable to recover it.
00:37:21.622 [2024-07-13 22:20:40.970388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.622 [2024-07-13 22:20:40.970566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.622 [2024-07-13 22:20:40.970600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.622 [2024-07-13 22:20:40.970622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.622 [2024-07-13 22:20:40.970646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.622 [2024-07-13 22:20:40.970686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.622 qpair failed and we were unable to recover it.
00:37:21.622 [2024-07-13 22:20:40.980447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.622 [2024-07-13 22:20:40.980632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.622 [2024-07-13 22:20:40.980665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.622 [2024-07-13 22:20:40.980687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.622 [2024-07-13 22:20:40.980705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.622 [2024-07-13 22:20:40.980744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.622 qpair failed and we were unable to recover it.
00:37:21.622 [2024-07-13 22:20:40.990505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.622 [2024-07-13 22:20:40.990688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.622 [2024-07-13 22:20:40.990736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.622 [2024-07-13 22:20:40.990759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.622 [2024-07-13 22:20:40.990776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.622 [2024-07-13 22:20:40.990831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.622 qpair failed and we were unable to recover it.
00:37:21.622 [2024-07-13 22:20:41.000566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.622 [2024-07-13 22:20:41.000800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.622 [2024-07-13 22:20:41.000833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.622 [2024-07-13 22:20:41.000861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.622 [2024-07-13 22:20:41.000888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.622 [2024-07-13 22:20:41.000935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.622 qpair failed and we were unable to recover it.
00:37:21.622 [2024-07-13 22:20:41.010542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.622 [2024-07-13 22:20:41.010780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.622 [2024-07-13 22:20:41.010823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.622 [2024-07-13 22:20:41.010846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.622 [2024-07-13 22:20:41.010872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.622 [2024-07-13 22:20:41.010916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.622 qpair failed and we were unable to recover it.
00:37:21.882 [2024-07-13 22:20:41.020555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.882 [2024-07-13 22:20:41.020739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.882 [2024-07-13 22:20:41.020773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.882 [2024-07-13 22:20:41.020796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.882 [2024-07-13 22:20:41.020814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.882 [2024-07-13 22:20:41.020863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.882 qpair failed and we were unable to recover it.
00:37:21.882 [2024-07-13 22:20:41.030539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.882 [2024-07-13 22:20:41.030708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.882 [2024-07-13 22:20:41.030742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.882 [2024-07-13 22:20:41.030765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.882 [2024-07-13 22:20:41.030784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.882 [2024-07-13 22:20:41.030824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.882 qpair failed and we were unable to recover it.
00:37:21.882 [2024-07-13 22:20:41.040632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.882 [2024-07-13 22:20:41.040824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.882 [2024-07-13 22:20:41.040857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.882 [2024-07-13 22:20:41.040889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.882 [2024-07-13 22:20:41.040909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.882 [2024-07-13 22:20:41.040948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.882 qpair failed and we were unable to recover it.
00:37:21.882 [2024-07-13 22:20:41.050640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.882 [2024-07-13 22:20:41.050812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.882 [2024-07-13 22:20:41.050846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.882 [2024-07-13 22:20:41.050875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.882 [2024-07-13 22:20:41.050897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.882 [2024-07-13 22:20:41.050937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.882 qpair failed and we were unable to recover it.
00:37:21.882 [2024-07-13 22:20:41.060693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.882 [2024-07-13 22:20:41.060888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.882 [2024-07-13 22:20:41.060922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.882 [2024-07-13 22:20:41.060952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.882 [2024-07-13 22:20:41.060972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.882 [2024-07-13 22:20:41.061012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.882 qpair failed and we were unable to recover it.
00:37:21.882 [2024-07-13 22:20:41.070720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:21.882 [2024-07-13 22:20:41.070911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:21.882 [2024-07-13 22:20:41.070944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:21.882 [2024-07-13 22:20:41.070966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:21.882 [2024-07-13 22:20:41.070985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:21.882 [2024-07-13 22:20:41.071023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:21.882 qpair failed and we were unable to recover it.
00:37:21.882 [2024-07-13 22:20:41.080740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.882 [2024-07-13 22:20:41.080936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.882 [2024-07-13 22:20:41.080970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.882 [2024-07-13 22:20:41.080991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.882 [2024-07-13 22:20:41.081009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.882 [2024-07-13 22:20:41.081049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.882 qpair failed and we were unable to recover it. 00:37:21.882 [2024-07-13 22:20:41.090759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.882 [2024-07-13 22:20:41.090936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.882 [2024-07-13 22:20:41.090970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.882 [2024-07-13 22:20:41.090992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.883 [2024-07-13 22:20:41.091010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.883 [2024-07-13 22:20:41.091049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.883 qpair failed and we were unable to recover it. 00:37:21.883 [2024-07-13 22:20:41.100774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.883 [2024-07-13 22:20:41.100962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.883 [2024-07-13 22:20:41.100995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.883 [2024-07-13 22:20:41.101017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.883 [2024-07-13 22:20:41.101036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.883 [2024-07-13 22:20:41.101075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.883 qpair failed and we were unable to recover it. 
00:37:21.883 [2024-07-13 22:20:41.110791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.883 [2024-07-13 22:20:41.110979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.883 [2024-07-13 22:20:41.111012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.883 [2024-07-13 22:20:41.111034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.883 [2024-07-13 22:20:41.111052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.883 [2024-07-13 22:20:41.111091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.883 qpair failed and we were unable to recover it. 00:37:21.883 [2024-07-13 22:20:41.120841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.883 [2024-07-13 22:20:41.121032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.883 [2024-07-13 22:20:41.121065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.883 [2024-07-13 22:20:41.121087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.883 [2024-07-13 22:20:41.121104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.883 [2024-07-13 22:20:41.121144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.883 qpair failed and we were unable to recover it. 00:37:21.883 [2024-07-13 22:20:41.130915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.883 [2024-07-13 22:20:41.131095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.883 [2024-07-13 22:20:41.131129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.883 [2024-07-13 22:20:41.131151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.883 [2024-07-13 22:20:41.131170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.883 [2024-07-13 22:20:41.131224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.883 qpair failed and we were unable to recover it. 
00:37:21.883 [2024-07-13 22:20:41.140893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.883 [2024-07-13 22:20:41.141071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.883 [2024-07-13 22:20:41.141105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.883 [2024-07-13 22:20:41.141127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.883 [2024-07-13 22:20:41.141146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.883 [2024-07-13 22:20:41.141185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.883 qpair failed and we were unable to recover it. 00:37:21.883 [2024-07-13 22:20:41.150941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.883 [2024-07-13 22:20:41.151116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.883 [2024-07-13 22:20:41.151169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.883 [2024-07-13 22:20:41.151191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.883 [2024-07-13 22:20:41.151209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.883 [2024-07-13 22:20:41.151264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.883 qpair failed and we were unable to recover it. 00:37:21.883 [2024-07-13 22:20:41.160979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.883 [2024-07-13 22:20:41.161159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.883 [2024-07-13 22:20:41.161192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.883 [2024-07-13 22:20:41.161214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.883 [2024-07-13 22:20:41.161232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.883 [2024-07-13 22:20:41.161272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.883 qpair failed and we were unable to recover it. 
00:37:21.883 [2024-07-13 22:20:41.170933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.883 [2024-07-13 22:20:41.171104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.883 [2024-07-13 22:20:41.171137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.883 [2024-07-13 22:20:41.171159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.883 [2024-07-13 22:20:41.171178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.883 [2024-07-13 22:20:41.171217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.883 qpair failed and we were unable to recover it. 00:37:21.883 [2024-07-13 22:20:41.181060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.883 [2024-07-13 22:20:41.181265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.883 [2024-07-13 22:20:41.181298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.883 [2024-07-13 22:20:41.181319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.883 [2024-07-13 22:20:41.181337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.883 [2024-07-13 22:20:41.181390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.883 qpair failed and we were unable to recover it. 00:37:21.883 [2024-07-13 22:20:41.191015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.883 [2024-07-13 22:20:41.191187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.883 [2024-07-13 22:20:41.191220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.883 [2024-07-13 22:20:41.191257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.883 [2024-07-13 22:20:41.191275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.883 [2024-07-13 22:20:41.191334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.883 qpair failed and we were unable to recover it. 
00:37:21.883 [2024-07-13 22:20:41.201092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.883 [2024-07-13 22:20:41.201303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.883 [2024-07-13 22:20:41.201342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.883 [2024-07-13 22:20:41.201364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.883 [2024-07-13 22:20:41.201381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.883 [2024-07-13 22:20:41.201420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.883 qpair failed and we were unable to recover it. 00:37:21.883 [2024-07-13 22:20:41.211116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.883 [2024-07-13 22:20:41.211307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.883 [2024-07-13 22:20:41.211355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.883 [2024-07-13 22:20:41.211378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.883 [2024-07-13 22:20:41.211395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.883 [2024-07-13 22:20:41.211448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.883 qpair failed and we were unable to recover it. 00:37:21.883 [2024-07-13 22:20:41.221113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.883 [2024-07-13 22:20:41.221316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.883 [2024-07-13 22:20:41.221349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.883 [2024-07-13 22:20:41.221371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.883 [2024-07-13 22:20:41.221388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.883 [2024-07-13 22:20:41.221444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.883 qpair failed and we were unable to recover it. 
00:37:21.883 [2024-07-13 22:20:41.231169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.883 [2024-07-13 22:20:41.231345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.883 [2024-07-13 22:20:41.231378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.883 [2024-07-13 22:20:41.231400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.883 [2024-07-13 22:20:41.231418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.883 [2024-07-13 22:20:41.231457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.884 qpair failed and we were unable to recover it. 00:37:21.884 [2024-07-13 22:20:41.241215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.884 [2024-07-13 22:20:41.241400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.884 [2024-07-13 22:20:41.241438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.884 [2024-07-13 22:20:41.241461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.884 [2024-07-13 22:20:41.241492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.884 [2024-07-13 22:20:41.241532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.884 qpair failed and we were unable to recover it. 00:37:21.884 [2024-07-13 22:20:41.251213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.884 [2024-07-13 22:20:41.251389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.884 [2024-07-13 22:20:41.251422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.884 [2024-07-13 22:20:41.251459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.884 [2024-07-13 22:20:41.251478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.884 [2024-07-13 22:20:41.251532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.884 qpair failed and we were unable to recover it. 
00:37:21.884 [2024-07-13 22:20:41.261299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.884 [2024-07-13 22:20:41.261513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.884 [2024-07-13 22:20:41.261546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.884 [2024-07-13 22:20:41.261568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.884 [2024-07-13 22:20:41.261587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.884 [2024-07-13 22:20:41.261626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.884 qpair failed and we were unable to recover it. 00:37:21.884 [2024-07-13 22:20:41.271311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:21.884 [2024-07-13 22:20:41.271487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:21.884 [2024-07-13 22:20:41.271521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:21.884 [2024-07-13 22:20:41.271543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:21.884 [2024-07-13 22:20:41.271561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:21.884 [2024-07-13 22:20:41.271608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:21.884 qpair failed and we were unable to recover it. 00:37:22.143 [2024-07-13 22:20:41.281379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.143 [2024-07-13 22:20:41.281577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.143 [2024-07-13 22:20:41.281611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.143 [2024-07-13 22:20:41.281633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.143 [2024-07-13 22:20:41.281652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.143 [2024-07-13 22:20:41.281698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.143 qpair failed and we were unable to recover it. 
00:37:22.143 [2024-07-13 22:20:41.291326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.143 [2024-07-13 22:20:41.291516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.143 [2024-07-13 22:20:41.291550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.143 [2024-07-13 22:20:41.291572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.143 [2024-07-13 22:20:41.291590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.143 [2024-07-13 22:20:41.291630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.143 qpair failed and we were unable to recover it. 00:37:22.143 [2024-07-13 22:20:41.301377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.143 [2024-07-13 22:20:41.301577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.143 [2024-07-13 22:20:41.301610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.143 [2024-07-13 22:20:41.301633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.143 [2024-07-13 22:20:41.301650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.143 [2024-07-13 22:20:41.301690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.143 qpair failed and we were unable to recover it. 00:37:22.143 [2024-07-13 22:20:41.311462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.143 [2024-07-13 22:20:41.311645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.143 [2024-07-13 22:20:41.311679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.143 [2024-07-13 22:20:41.311701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.143 [2024-07-13 22:20:41.311719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.143 [2024-07-13 22:20:41.311758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.143 qpair failed and we were unable to recover it. 
00:37:22.143 [2024-07-13 22:20:41.321471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.143 [2024-07-13 22:20:41.321659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.143 [2024-07-13 22:20:41.321693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.143 [2024-07-13 22:20:41.321715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.143 [2024-07-13 22:20:41.321733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.143 [2024-07-13 22:20:41.321772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.143 qpair failed and we were unable to recover it. 00:37:22.143 [2024-07-13 22:20:41.331421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.143 [2024-07-13 22:20:41.331603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.143 [2024-07-13 22:20:41.331636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.143 [2024-07-13 22:20:41.331658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.143 [2024-07-13 22:20:41.331677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.143 [2024-07-13 22:20:41.331716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.143 qpair failed and we were unable to recover it. 00:37:22.143 [2024-07-13 22:20:41.341525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.143 [2024-07-13 22:20:41.341763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.143 [2024-07-13 22:20:41.341795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.143 [2024-07-13 22:20:41.341816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.143 [2024-07-13 22:20:41.341834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.143 [2024-07-13 22:20:41.341897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.143 qpair failed and we were unable to recover it. 
00:37:22.143 [2024-07-13 22:20:41.351501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.143 [2024-07-13 22:20:41.351674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.143 [2024-07-13 22:20:41.351707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.143 [2024-07-13 22:20:41.351728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.143 [2024-07-13 22:20:41.351747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.143 [2024-07-13 22:20:41.351787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.143 qpair failed and we were unable to recover it. 00:37:22.143 [2024-07-13 22:20:41.361661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.143 [2024-07-13 22:20:41.361915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.143 [2024-07-13 22:20:41.361964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.143 [2024-07-13 22:20:41.361986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.143 [2024-07-13 22:20:41.362004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.143 [2024-07-13 22:20:41.362044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.143 qpair failed and we were unable to recover it. 00:37:22.143 [2024-07-13 22:20:41.371592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.143 [2024-07-13 22:20:41.371763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.143 [2024-07-13 22:20:41.371796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.143 [2024-07-13 22:20:41.371834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.143 [2024-07-13 22:20:41.371857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.143 [2024-07-13 22:20:41.371922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.143 qpair failed and we were unable to recover it. 
00:37:22.143 [2024-07-13 22:20:41.381596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.144 [2024-07-13 22:20:41.381783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.144 [2024-07-13 22:20:41.381816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.144 [2024-07-13 22:20:41.381838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.144 [2024-07-13 22:20:41.381856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.144 [2024-07-13 22:20:41.381905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.144 qpair failed and we were unable to recover it. 00:37:22.144 [2024-07-13 22:20:41.391623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.144 [2024-07-13 22:20:41.391798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.144 [2024-07-13 22:20:41.391831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.144 [2024-07-13 22:20:41.391852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.144 [2024-07-13 22:20:41.391879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.144 [2024-07-13 22:20:41.391926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.144 qpair failed and we were unable to recover it. 00:37:22.144 [2024-07-13 22:20:41.401671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.144 [2024-07-13 22:20:41.401849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.144 [2024-07-13 22:20:41.401892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.144 [2024-07-13 22:20:41.401915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.144 [2024-07-13 22:20:41.401936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.144 [2024-07-13 22:20:41.401976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.144 qpair failed and we were unable to recover it. 
00:37:22.144 [2024-07-13 22:20:41.411650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.144 [2024-07-13 22:20:41.411813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.144 [2024-07-13 22:20:41.411846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.144 [2024-07-13 22:20:41.411875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.144 [2024-07-13 22:20:41.411898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.144 [2024-07-13 22:20:41.411938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.144 qpair failed and we were unable to recover it. 00:37:22.144 [2024-07-13 22:20:41.421728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.144 [2024-07-13 22:20:41.421904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.144 [2024-07-13 22:20:41.421937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.144 [2024-07-13 22:20:41.421958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.144 [2024-07-13 22:20:41.421977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.144 [2024-07-13 22:20:41.422017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.144 qpair failed and we were unable to recover it. 00:37:22.144 [2024-07-13 22:20:41.431734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.144 [2024-07-13 22:20:41.431946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.144 [2024-07-13 22:20:41.431979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.144 [2024-07-13 22:20:41.432001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.144 [2024-07-13 22:20:41.432019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.144 [2024-07-13 22:20:41.432058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.144 qpair failed and we were unable to recover it. 
00:37:22.144 [2024-07-13 22:20:41.441839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.144 [2024-07-13 22:20:41.442036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.144 [2024-07-13 22:20:41.442069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.144 [2024-07-13 22:20:41.442091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.144 [2024-07-13 22:20:41.442109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.144 [2024-07-13 22:20:41.442150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.144 qpair failed and we were unable to recover it. 00:37:22.144 [2024-07-13 22:20:41.451798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.144 [2024-07-13 22:20:41.451983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.144 [2024-07-13 22:20:41.452014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.144 [2024-07-13 22:20:41.452036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.144 [2024-07-13 22:20:41.452053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.144 [2024-07-13 22:20:41.452092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.144 qpair failed and we were unable to recover it. 00:37:22.144 [2024-07-13 22:20:41.461999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.144 [2024-07-13 22:20:41.462182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.144 [2024-07-13 22:20:41.462229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.144 [2024-07-13 22:20:41.462256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.144 [2024-07-13 22:20:41.462273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.144 [2024-07-13 22:20:41.462327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.144 qpair failed and we were unable to recover it. 
00:37:22.144 [2024-07-13 22:20:41.471891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.144 [2024-07-13 22:20:41.472065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.144 [2024-07-13 22:20:41.472099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.144 [2024-07-13 22:20:41.472121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.144 [2024-07-13 22:20:41.472139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.144 [2024-07-13 22:20:41.472193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.144 qpair failed and we were unable to recover it. 00:37:22.144 [2024-07-13 22:20:41.481933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.144 [2024-07-13 22:20:41.482118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.144 [2024-07-13 22:20:41.482151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.144 [2024-07-13 22:20:41.482173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.144 [2024-07-13 22:20:41.482190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.144 [2024-07-13 22:20:41.482229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.144 qpair failed and we were unable to recover it. 00:37:22.144 [2024-07-13 22:20:41.491920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.144 [2024-07-13 22:20:41.492107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.144 [2024-07-13 22:20:41.492139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.144 [2024-07-13 22:20:41.492161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.144 [2024-07-13 22:20:41.492194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.144 [2024-07-13 22:20:41.492247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.144 qpair failed and we were unable to recover it. 
00:37:22.144 [2024-07-13 22:20:41.502009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.144 [2024-07-13 22:20:41.502187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.144 [2024-07-13 22:20:41.502243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.144 [2024-07-13 22:20:41.502265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.144 [2024-07-13 22:20:41.502282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.144 [2024-07-13 22:20:41.502334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.144 qpair failed and we were unable to recover it. 00:37:22.144 [2024-07-13 22:20:41.511986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.144 [2024-07-13 22:20:41.512155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.144 [2024-07-13 22:20:41.512204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.144 [2024-07-13 22:20:41.512226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.144 [2024-07-13 22:20:41.512243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.144 [2024-07-13 22:20:41.512295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.144 qpair failed and we were unable to recover it. 00:37:22.144 [2024-07-13 22:20:41.522049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.144 [2024-07-13 22:20:41.522276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.144 [2024-07-13 22:20:41.522309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.145 [2024-07-13 22:20:41.522331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.145 [2024-07-13 22:20:41.522349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.145 [2024-07-13 22:20:41.522388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.145 qpair failed and we were unable to recover it. 
00:37:22.145 [2024-07-13 22:20:41.532121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.145 [2024-07-13 22:20:41.532333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.145 [2024-07-13 22:20:41.532377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.145 [2024-07-13 22:20:41.532400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.145 [2024-07-13 22:20:41.532421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.145 [2024-07-13 22:20:41.532460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.145 qpair failed and we were unable to recover it. 00:37:22.406 [2024-07-13 22:20:41.542087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.406 [2024-07-13 22:20:41.542262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.406 [2024-07-13 22:20:41.542296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.406 [2024-07-13 22:20:41.542318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.406 [2024-07-13 22:20:41.542336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.406 [2024-07-13 22:20:41.542375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-07-13 22:20:41.552103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.406 [2024-07-13 22:20:41.552282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.406 [2024-07-13 22:20:41.552316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.406 [2024-07-13 22:20:41.552343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.406 [2024-07-13 22:20:41.552362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.406 [2024-07-13 22:20:41.552401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.406 qpair failed and we were unable to recover it. 
00:37:22.406 [2024-07-13 22:20:41.562203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.406 [2024-07-13 22:20:41.562400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.406 [2024-07-13 22:20:41.562434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.406 [2024-07-13 22:20:41.562456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.406 [2024-07-13 22:20:41.562474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.406 [2024-07-13 22:20:41.562512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-07-13 22:20:41.572136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.406 [2024-07-13 22:20:41.572341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.406 [2024-07-13 22:20:41.572374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.406 [2024-07-13 22:20:41.572395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.406 [2024-07-13 22:20:41.572412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.406 [2024-07-13 22:20:41.572465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-07-13 22:20:41.582223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.406 [2024-07-13 22:20:41.582397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.406 [2024-07-13 22:20:41.582444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.406 [2024-07-13 22:20:41.582466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.406 [2024-07-13 22:20:41.582483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.406 [2024-07-13 22:20:41.582536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.406 qpair failed and we were unable to recover it. 
00:37:22.406 [2024-07-13 22:20:41.592215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.406 [2024-07-13 22:20:41.592395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.406 [2024-07-13 22:20:41.592433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.406 [2024-07-13 22:20:41.592455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.406 [2024-07-13 22:20:41.592472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.406 [2024-07-13 22:20:41.592511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-07-13 22:20:41.602329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.406 [2024-07-13 22:20:41.602515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.406 [2024-07-13 22:20:41.602564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.406 [2024-07-13 22:20:41.602586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.406 [2024-07-13 22:20:41.602603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.406 [2024-07-13 22:20:41.602656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.406 qpair failed and we were unable to recover it. 00:37:22.406 [2024-07-13 22:20:41.612296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.406 [2024-07-13 22:20:41.612469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.406 [2024-07-13 22:20:41.612502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.407 [2024-07-13 22:20:41.612523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.407 [2024-07-13 22:20:41.612541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.407 [2024-07-13 22:20:41.612588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.407 qpair failed and we were unable to recover it. 
00:37:22.407 [2024-07-13 22:20:41.622321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.407 [2024-07-13 22:20:41.622513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.407 [2024-07-13 22:20:41.622546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.407 [2024-07-13 22:20:41.622568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.407 [2024-07-13 22:20:41.622585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.407 [2024-07-13 22:20:41.622624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-07-13 22:20:41.632375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.407 [2024-07-13 22:20:41.632572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.407 [2024-07-13 22:20:41.632617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.407 [2024-07-13 22:20:41.632639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.407 [2024-07-13 22:20:41.632655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.407 [2024-07-13 22:20:41.632694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-07-13 22:20:41.642473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.407 [2024-07-13 22:20:41.642730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.407 [2024-07-13 22:20:41.642777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.407 [2024-07-13 22:20:41.642800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.407 [2024-07-13 22:20:41.642818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.407 [2024-07-13 22:20:41.642877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.407 qpair failed and we were unable to recover it. 
00:37:22.407 [2024-07-13 22:20:41.652391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.407 [2024-07-13 22:20:41.652561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.407 [2024-07-13 22:20:41.652594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.407 [2024-07-13 22:20:41.652616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.407 [2024-07-13 22:20:41.652633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.407 [2024-07-13 22:20:41.652671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-07-13 22:20:41.662491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.407 [2024-07-13 22:20:41.662678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.407 [2024-07-13 22:20:41.662713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.407 [2024-07-13 22:20:41.662735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.407 [2024-07-13 22:20:41.662753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.407 [2024-07-13 22:20:41.662792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-07-13 22:20:41.672517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.407 [2024-07-13 22:20:41.672745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.407 [2024-07-13 22:20:41.672778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.407 [2024-07-13 22:20:41.672799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.407 [2024-07-13 22:20:41.672816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.407 [2024-07-13 22:20:41.672882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.407 qpair failed and we were unable to recover it. 
00:37:22.407 [2024-07-13 22:20:41.682590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.407 [2024-07-13 22:20:41.682809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.407 [2024-07-13 22:20:41.682875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.407 [2024-07-13 22:20:41.682905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.407 [2024-07-13 22:20:41.682923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.407 [2024-07-13 22:20:41.682967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-07-13 22:20:41.692618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.407 [2024-07-13 22:20:41.692827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.407 [2024-07-13 22:20:41.692885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.407 [2024-07-13 22:20:41.692910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.407 [2024-07-13 22:20:41.692928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.407 [2024-07-13 22:20:41.692968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-07-13 22:20:41.702573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.407 [2024-07-13 22:20:41.702810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.407 [2024-07-13 22:20:41.702854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.407 [2024-07-13 22:20:41.702913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.407 [2024-07-13 22:20:41.702944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.407 [2024-07-13 22:20:41.703002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.407 qpair failed and we were unable to recover it. 
00:37:22.407 [2024-07-13 22:20:41.712574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.407 [2024-07-13 22:20:41.712752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.407 [2024-07-13 22:20:41.712786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.407 [2024-07-13 22:20:41.712808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.407 [2024-07-13 22:20:41.712826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.407 [2024-07-13 22:20:41.712873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-07-13 22:20:41.722659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.407 [2024-07-13 22:20:41.722852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.407 [2024-07-13 22:20:41.722892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.407 [2024-07-13 22:20:41.722914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.407 [2024-07-13 22:20:41.722932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.407 [2024-07-13 22:20:41.722972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-07-13 22:20:41.732605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.407 [2024-07-13 22:20:41.732783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.407 [2024-07-13 22:20:41.732822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.407 [2024-07-13 22:20:41.732845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.407 [2024-07-13 22:20:41.732863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.407 [2024-07-13 22:20:41.732912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.407 qpair failed and we were unable to recover it. 
00:37:22.407 [2024-07-13 22:20:41.742685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.407 [2024-07-13 22:20:41.742880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.407 [2024-07-13 22:20:41.742913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.407 [2024-07-13 22:20:41.742939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.407 [2024-07-13 22:20:41.742957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.407 [2024-07-13 22:20:41.742996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.407 qpair failed and we were unable to recover it. 00:37:22.407 [2024-07-13 22:20:41.752677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.407 [2024-07-13 22:20:41.752875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.407 [2024-07-13 22:20:41.752919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.407 [2024-07-13 22:20:41.752941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.407 [2024-07-13 22:20:41.752958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.407 [2024-07-13 22:20:41.753012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-07-13 22:20:41.762788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.408 [2024-07-13 22:20:41.763020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.408 [2024-07-13 22:20:41.763054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.408 [2024-07-13 22:20:41.763076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.408 [2024-07-13 22:20:41.763093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.408 [2024-07-13 22:20:41.763132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.408 qpair failed and we were unable to recover it. 
00:37:22.408 [2024-07-13 22:20:41.772746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.408 [2024-07-13 22:20:41.772937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.408 [2024-07-13 22:20:41.772971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.408 [2024-07-13 22:20:41.772992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.408 [2024-07-13 22:20:41.773015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.408 [2024-07-13 22:20:41.773055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-07-13 22:20:41.782824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.408 [2024-07-13 22:20:41.783018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.408 [2024-07-13 22:20:41.783052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.408 [2024-07-13 22:20:41.783074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.408 [2024-07-13 22:20:41.783091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.408 [2024-07-13 22:20:41.783155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.408 qpair failed and we were unable to recover it. 00:37:22.408 [2024-07-13 22:20:41.792847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.408 [2024-07-13 22:20:41.793042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.408 [2024-07-13 22:20:41.793076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.408 [2024-07-13 22:20:41.793099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.408 [2024-07-13 22:20:41.793117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.408 [2024-07-13 22:20:41.793156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.408 qpair failed and we were unable to recover it. 
00:37:22.669 [2024-07-13 22:20:41.802850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.669 [2024-07-13 22:20:41.803067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.669 [2024-07-13 22:20:41.803106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.669 [2024-07-13 22:20:41.803130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.669 [2024-07-13 22:20:41.803148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.669 [2024-07-13 22:20:41.803188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.669 qpair failed and we were unable to recover it. 00:37:22.669 [2024-07-13 22:20:41.812822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.669 [2024-07-13 22:20:41.813003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.669 [2024-07-13 22:20:41.813037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.669 [2024-07-13 22:20:41.813060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.669 [2024-07-13 22:20:41.813078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.669 [2024-07-13 22:20:41.813117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.669 qpair failed and we were unable to recover it. 00:37:22.669 [2024-07-13 22:20:41.822908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.669 [2024-07-13 22:20:41.823093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.669 [2024-07-13 22:20:41.823126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.669 [2024-07-13 22:20:41.823148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.669 [2024-07-13 22:20:41.823166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.669 [2024-07-13 22:20:41.823204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.669 qpair failed and we were unable to recover it. 
00:37:22.669 [2024-07-13 22:20:41.832953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.669 [2024-07-13 22:20:41.833157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.669 [2024-07-13 22:20:41.833206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.669 [2024-07-13 22:20:41.833228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.669 [2024-07-13 22:20:41.833245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.670 [2024-07-13 22:20:41.833298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.670 qpair failed and we were unable to recover it. 00:37:22.670 [2024-07-13 22:20:41.843013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.670 [2024-07-13 22:20:41.843246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.670 [2024-07-13 22:20:41.843278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.670 [2024-07-13 22:20:41.843299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.670 [2024-07-13 22:20:41.843317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.670 [2024-07-13 22:20:41.843369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.670 qpair failed and we were unable to recover it. 00:37:22.670 [2024-07-13 22:20:41.853032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.670 [2024-07-13 22:20:41.853272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.670 [2024-07-13 22:20:41.853305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.670 [2024-07-13 22:20:41.853326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.670 [2024-07-13 22:20:41.853343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.670 [2024-07-13 22:20:41.853396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.670 qpair failed and we were unable to recover it. 
00:37:22.670 [2024-07-13 22:20:41.863010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.670 [2024-07-13 22:20:41.863244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.670 [2024-07-13 22:20:41.863277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.670 [2024-07-13 22:20:41.863304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.670 [2024-07-13 22:20:41.863322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.670 [2024-07-13 22:20:41.863360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.670 qpair failed and we were unable to recover it. 00:37:22.670 [2024-07-13 22:20:41.873102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.670 [2024-07-13 22:20:41.873284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.670 [2024-07-13 22:20:41.873332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.670 [2024-07-13 22:20:41.873353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.670 [2024-07-13 22:20:41.873370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.670 [2024-07-13 22:20:41.873423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.670 qpair failed and we were unable to recover it. 00:37:22.670 [2024-07-13 22:20:41.883148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.670 [2024-07-13 22:20:41.883342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.670 [2024-07-13 22:20:41.883390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.670 [2024-07-13 22:20:41.883412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.670 [2024-07-13 22:20:41.883429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.670 [2024-07-13 22:20:41.883482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.670 qpair failed and we were unable to recover it. 
00:37:22.670 [2024-07-13 22:20:41.893086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.670 [2024-07-13 22:20:41.893276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.670 [2024-07-13 22:20:41.893309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.670 [2024-07-13 22:20:41.893331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.670 [2024-07-13 22:20:41.893348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.670 [2024-07-13 22:20:41.893386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.670 qpair failed and we were unable to recover it. 00:37:22.670 [2024-07-13 22:20:41.903179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.670 [2024-07-13 22:20:41.903358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.670 [2024-07-13 22:20:41.903392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.670 [2024-07-13 22:20:41.903414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.670 [2024-07-13 22:20:41.903432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.670 [2024-07-13 22:20:41.903470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.670 qpair failed and we were unable to recover it. 00:37:22.670 [2024-07-13 22:20:41.913127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.670 [2024-07-13 22:20:41.913329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.670 [2024-07-13 22:20:41.913362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.670 [2024-07-13 22:20:41.913384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.670 [2024-07-13 22:20:41.913402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.670 [2024-07-13 22:20:41.913440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.670 qpair failed and we were unable to recover it. 
00:37:22.670 [2024-07-13 22:20:41.923242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.670 [2024-07-13 22:20:41.923511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.670 [2024-07-13 22:20:41.923545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.670 [2024-07-13 22:20:41.923567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.670 [2024-07-13 22:20:41.923583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.670 [2024-07-13 22:20:41.923637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.670 qpair failed and we were unable to recover it. 00:37:22.670 [2024-07-13 22:20:41.933245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.670 [2024-07-13 22:20:41.933419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.670 [2024-07-13 22:20:41.933453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.670 [2024-07-13 22:20:41.933475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.670 [2024-07-13 22:20:41.933493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.670 [2024-07-13 22:20:41.933531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.670 qpair failed and we were unable to recover it. 00:37:22.670 [2024-07-13 22:20:41.943211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.670 [2024-07-13 22:20:41.943413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.670 [2024-07-13 22:20:41.943446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.670 [2024-07-13 22:20:41.943467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.670 [2024-07-13 22:20:41.943485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.670 [2024-07-13 22:20:41.943524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.670 qpair failed and we were unable to recover it. 
00:37:22.670 [2024-07-13 22:20:41.953337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.670 [2024-07-13 22:20:41.953554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.670 [2024-07-13 22:20:41.953587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.670 [2024-07-13 22:20:41.953614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.670 [2024-07-13 22:20:41.953632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.670 [2024-07-13 22:20:41.953671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.670 qpair failed and we were unable to recover it. 00:37:22.670 [2024-07-13 22:20:41.963370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.670 [2024-07-13 22:20:41.963568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.670 [2024-07-13 22:20:41.963601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.670 [2024-07-13 22:20:41.963623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.670 [2024-07-13 22:20:41.963641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.670 [2024-07-13 22:20:41.963679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.670 qpair failed and we were unable to recover it. 00:37:22.670 [2024-07-13 22:20:41.973319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.670 [2024-07-13 22:20:41.973498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.670 [2024-07-13 22:20:41.973547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.670 [2024-07-13 22:20:41.973569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.670 [2024-07-13 22:20:41.973586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.670 [2024-07-13 22:20:41.973638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.670 qpair failed and we were unable to recover it. 
00:37:22.670 [2024-07-13 22:20:41.983375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.671 [2024-07-13 22:20:41.983582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.671 [2024-07-13 22:20:41.983621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.671 [2024-07-13 22:20:41.983643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.671 [2024-07-13 22:20:41.983659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.671 [2024-07-13 22:20:41.983712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.671 qpair failed and we were unable to recover it. 00:37:22.671 [2024-07-13 22:20:41.993355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.671 [2024-07-13 22:20:41.993555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.671 [2024-07-13 22:20:41.993588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.671 [2024-07-13 22:20:41.993610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.671 [2024-07-13 22:20:41.993627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.671 [2024-07-13 22:20:41.993666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.671 qpair failed and we were unable to recover it. 00:37:22.671 [2024-07-13 22:20:42.003448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.671 [2024-07-13 22:20:42.003635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.671 [2024-07-13 22:20:42.003669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.671 [2024-07-13 22:20:42.003691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.671 [2024-07-13 22:20:42.003710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.671 [2024-07-13 22:20:42.003748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.671 qpair failed and we were unable to recover it. 
00:37:22.671 [2024-07-13 22:20:42.013475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.671 [2024-07-13 22:20:42.013665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.671 [2024-07-13 22:20:42.013699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.671 [2024-07-13 22:20:42.013734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.671 [2024-07-13 22:20:42.013752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.671 [2024-07-13 22:20:42.013805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.671 qpair failed and we were unable to recover it. 00:37:22.671 [2024-07-13 22:20:42.023730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.671 [2024-07-13 22:20:42.023927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.671 [2024-07-13 22:20:42.023961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.671 [2024-07-13 22:20:42.023983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.671 [2024-07-13 22:20:42.024001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.671 [2024-07-13 22:20:42.024039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.671 qpair failed and we were unable to recover it. 00:37:22.671 [2024-07-13 22:20:42.033531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.671 [2024-07-13 22:20:42.033719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.671 [2024-07-13 22:20:42.033767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.671 [2024-07-13 22:20:42.033789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.671 [2024-07-13 22:20:42.033807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.671 [2024-07-13 22:20:42.033859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.671 qpair failed and we were unable to recover it. 
00:37:22.671 [2024-07-13 22:20:42.043557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.671 [2024-07-13 22:20:42.043744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.671 [2024-07-13 22:20:42.043783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.671 [2024-07-13 22:20:42.043806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.671 [2024-07-13 22:20:42.043824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.671 [2024-07-13 22:20:42.043863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.671 qpair failed and we were unable to recover it. 00:37:22.671 [2024-07-13 22:20:42.053567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.671 [2024-07-13 22:20:42.053767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.671 [2024-07-13 22:20:42.053800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.671 [2024-07-13 22:20:42.053822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.671 [2024-07-13 22:20:42.053839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.671 [2024-07-13 22:20:42.053884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.671 qpair failed and we were unable to recover it. 00:37:22.933 [2024-07-13 22:20:42.063609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.933 [2024-07-13 22:20:42.063794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.933 [2024-07-13 22:20:42.063828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.933 [2024-07-13 22:20:42.063850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.933 [2024-07-13 22:20:42.063876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.933 [2024-07-13 22:20:42.063918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.933 qpair failed and we were unable to recover it. 
00:37:22.933 [2024-07-13 22:20:42.073620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.933 [2024-07-13 22:20:42.073818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.933 [2024-07-13 22:20:42.073853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.933 [2024-07-13 22:20:42.073883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.933 [2024-07-13 22:20:42.073909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.933 [2024-07-13 22:20:42.073948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.933 qpair failed and we were unable to recover it. 00:37:22.933 [2024-07-13 22:20:42.083741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.933 [2024-07-13 22:20:42.083939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.933 [2024-07-13 22:20:42.083973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.933 [2024-07-13 22:20:42.083996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.933 [2024-07-13 22:20:42.084014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.933 [2024-07-13 22:20:42.084059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.933 qpair failed and we were unable to recover it. 00:37:22.933 [2024-07-13 22:20:42.093728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.933 [2024-07-13 22:20:42.093910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.933 [2024-07-13 22:20:42.093945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.933 [2024-07-13 22:20:42.093967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.933 [2024-07-13 22:20:42.093985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.933 [2024-07-13 22:20:42.094024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.933 qpair failed and we were unable to recover it. 
00:37:22.933 [2024-07-13 22:20:42.103688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.933 [2024-07-13 22:20:42.103881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.933 [2024-07-13 22:20:42.103914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.933 [2024-07-13 22:20:42.103937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.933 [2024-07-13 22:20:42.103955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.933 [2024-07-13 22:20:42.103994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.933 qpair failed and we were unable to recover it. 00:37:22.933 [2024-07-13 22:20:42.113837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.933 [2024-07-13 22:20:42.114074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.933 [2024-07-13 22:20:42.114112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.933 [2024-07-13 22:20:42.114136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.933 [2024-07-13 22:20:42.114169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.933 [2024-07-13 22:20:42.114223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.933 qpair failed and we were unable to recover it. 00:37:22.933 [2024-07-13 22:20:42.123986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.933 [2024-07-13 22:20:42.124168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.933 [2024-07-13 22:20:42.124202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.933 [2024-07-13 22:20:42.124224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.933 [2024-07-13 22:20:42.124242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.933 [2024-07-13 22:20:42.124280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.933 qpair failed and we were unable to recover it. 
00:37:22.933 [2024-07-13 22:20:42.133909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.933 [2024-07-13 22:20:42.134075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.933 [2024-07-13 22:20:42.134114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.933 [2024-07-13 22:20:42.134138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.933 [2024-07-13 22:20:42.134155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.933 [2024-07-13 22:20:42.134194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.933 qpair failed and we were unable to recover it. 00:37:22.933 [2024-07-13 22:20:42.143873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.933 [2024-07-13 22:20:42.144106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.933 [2024-07-13 22:20:42.144139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.933 [2024-07-13 22:20:42.144163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.933 [2024-07-13 22:20:42.144180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.933 [2024-07-13 22:20:42.144220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.933 qpair failed and we were unable to recover it. 00:37:22.933 [2024-07-13 22:20:42.154018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.933 [2024-07-13 22:20:42.154202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.933 [2024-07-13 22:20:42.154235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.933 [2024-07-13 22:20:42.154258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.933 [2024-07-13 22:20:42.154276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.933 [2024-07-13 22:20:42.154314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.933 qpair failed and we were unable to recover it. 
00:37:22.933 [2024-07-13 22:20:42.163930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.933 [2024-07-13 22:20:42.164125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.933 [2024-07-13 22:20:42.164158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.933 [2024-07-13 22:20:42.164180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.933 [2024-07-13 22:20:42.164197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.934 [2024-07-13 22:20:42.164236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.934 qpair failed and we were unable to recover it. 00:37:22.934 [2024-07-13 22:20:42.173978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.934 [2024-07-13 22:20:42.174172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.934 [2024-07-13 22:20:42.174205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.934 [2024-07-13 22:20:42.174228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.934 [2024-07-13 22:20:42.174251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.934 [2024-07-13 22:20:42.174297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.934 qpair failed and we were unable to recover it. 00:37:22.934 [2024-07-13 22:20:42.183962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.934 [2024-07-13 22:20:42.184145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.934 [2024-07-13 22:20:42.184179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.934 [2024-07-13 22:20:42.184201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.934 [2024-07-13 22:20:42.184218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.934 [2024-07-13 22:20:42.184257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.934 qpair failed and we were unable to recover it. 
00:37:22.934 [2024-07-13 22:20:42.193958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.934 [2024-07-13 22:20:42.194137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.934 [2024-07-13 22:20:42.194170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.934 [2024-07-13 22:20:42.194193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.934 [2024-07-13 22:20:42.194211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.934 [2024-07-13 22:20:42.194250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.934 qpair failed and we were unable to recover it. 00:37:22.934 [2024-07-13 22:20:42.204036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.934 [2024-07-13 22:20:42.204229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.934 [2024-07-13 22:20:42.204263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.934 [2024-07-13 22:20:42.204285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.934 [2024-07-13 22:20:42.204303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.934 [2024-07-13 22:20:42.204342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.934 qpair failed and we were unable to recover it. 00:37:22.934 [2024-07-13 22:20:42.214023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:22.934 [2024-07-13 22:20:42.214195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:22.934 [2024-07-13 22:20:42.214228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:22.934 [2024-07-13 22:20:42.214249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:22.934 [2024-07-13 22:20:42.214266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:22.934 [2024-07-13 22:20:42.214305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:22.934 qpair failed and we were unable to recover it. 
00:37:22.934 [2024-07-13 22:20:42.224069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.934 [2024-07-13 22:20:42.224264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.934 [2024-07-13 22:20:42.224298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.934 [2024-07-13 22:20:42.224320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.934 [2024-07-13 22:20:42.224337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:22.934 [2024-07-13 22:20:42.224376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.934 qpair failed and we were unable to recover it.
00:37:22.934 [2024-07-13 22:20:42.234146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.934 [2024-07-13 22:20:42.234351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.934 [2024-07-13 22:20:42.234384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.934 [2024-07-13 22:20:42.234406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.934 [2024-07-13 22:20:42.234424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:22.934 [2024-07-13 22:20:42.234462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.934 qpair failed and we were unable to recover it.
00:37:22.934 [2024-07-13 22:20:42.244141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.934 [2024-07-13 22:20:42.244322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.934 [2024-07-13 22:20:42.244355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.934 [2024-07-13 22:20:42.244377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.934 [2024-07-13 22:20:42.244394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:22.934 [2024-07-13 22:20:42.244432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.934 qpair failed and we were unable to recover it.
00:37:22.934 [2024-07-13 22:20:42.254189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.934 [2024-07-13 22:20:42.254369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.934 [2024-07-13 22:20:42.254416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.934 [2024-07-13 22:20:42.254438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.934 [2024-07-13 22:20:42.254455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:22.934 [2024-07-13 22:20:42.254508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.934 qpair failed and we were unable to recover it.
00:37:22.934 [2024-07-13 22:20:42.264194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.934 [2024-07-13 22:20:42.264380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.934 [2024-07-13 22:20:42.264413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.934 [2024-07-13 22:20:42.264435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.934 [2024-07-13 22:20:42.264457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:22.934 [2024-07-13 22:20:42.264497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.934 qpair failed and we were unable to recover it.
00:37:22.934 [2024-07-13 22:20:42.274262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.934 [2024-07-13 22:20:42.274467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.934 [2024-07-13 22:20:42.274501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.934 [2024-07-13 22:20:42.274524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.934 [2024-07-13 22:20:42.274542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:22.934 [2024-07-13 22:20:42.274580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.934 qpair failed and we were unable to recover it.
00:37:22.934 [2024-07-13 22:20:42.284291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.934 [2024-07-13 22:20:42.284480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.934 [2024-07-13 22:20:42.284514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.934 [2024-07-13 22:20:42.284536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.934 [2024-07-13 22:20:42.284554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:22.934 [2024-07-13 22:20:42.284592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.934 qpair failed and we were unable to recover it.
00:37:22.934 [2024-07-13 22:20:42.294246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.934 [2024-07-13 22:20:42.294421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.934 [2024-07-13 22:20:42.294455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.934 [2024-07-13 22:20:42.294477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.934 [2024-07-13 22:20:42.294495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:22.934 [2024-07-13 22:20:42.294533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.934 qpair failed and we were unable to recover it.
00:37:22.934 [2024-07-13 22:20:42.304369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.934 [2024-07-13 22:20:42.304615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.934 [2024-07-13 22:20:42.304647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.934 [2024-07-13 22:20:42.304669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.934 [2024-07-13 22:20:42.304686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:22.934 [2024-07-13 22:20:42.304740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.934 qpair failed and we were unable to recover it.
00:37:22.934 [2024-07-13 22:20:42.314307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.935 [2024-07-13 22:20:42.314478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.935 [2024-07-13 22:20:42.314512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.935 [2024-07-13 22:20:42.314535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.935 [2024-07-13 22:20:42.314553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:22.935 [2024-07-13 22:20:42.314592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.935 qpair failed and we were unable to recover it.
00:37:22.935 [2024-07-13 22:20:42.324394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:22.935 [2024-07-13 22:20:42.324597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:22.935 [2024-07-13 22:20:42.324631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:22.935 [2024-07-13 22:20:42.324670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:22.935 [2024-07-13 22:20:42.324700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:22.935 [2024-07-13 22:20:42.324747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:22.935 qpair failed and we were unable to recover it.
00:37:23.196 [2024-07-13 22:20:42.334428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.197 [2024-07-13 22:20:42.334603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.197 [2024-07-13 22:20:42.334652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.197 [2024-07-13 22:20:42.334674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.197 [2024-07-13 22:20:42.334691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.197 [2024-07-13 22:20:42.334745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.197 qpair failed and we were unable to recover it.
00:37:23.197 [2024-07-13 22:20:42.344382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.197 [2024-07-13 22:20:42.344558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.197 [2024-07-13 22:20:42.344591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.197 [2024-07-13 22:20:42.344615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.197 [2024-07-13 22:20:42.344633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.197 [2024-07-13 22:20:42.344672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.197 qpair failed and we were unable to recover it.
00:37:23.197 [2024-07-13 22:20:42.354456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.197 [2024-07-13 22:20:42.354629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.197 [2024-07-13 22:20:42.354678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.197 [2024-07-13 22:20:42.354706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.197 [2024-07-13 22:20:42.354724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.197 [2024-07-13 22:20:42.354777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.197 qpair failed and we were unable to recover it.
00:37:23.197 [2024-07-13 22:20:42.364495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.197 [2024-07-13 22:20:42.364686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.197 [2024-07-13 22:20:42.364719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.197 [2024-07-13 22:20:42.364741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.197 [2024-07-13 22:20:42.364759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.197 [2024-07-13 22:20:42.364797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.197 qpair failed and we were unable to recover it.
00:37:23.197 [2024-07-13 22:20:42.374455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.197 [2024-07-13 22:20:42.374652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.197 [2024-07-13 22:20:42.374691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.197 [2024-07-13 22:20:42.374713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.197 [2024-07-13 22:20:42.374730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.197 [2024-07-13 22:20:42.374769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.197 qpair failed and we were unable to recover it.
00:37:23.197 [2024-07-13 22:20:42.384588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.197 [2024-07-13 22:20:42.384789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.197 [2024-07-13 22:20:42.384822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.197 [2024-07-13 22:20:42.384843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.197 [2024-07-13 22:20:42.384890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.197 [2024-07-13 22:20:42.384932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.197 qpair failed and we were unable to recover it.
00:37:23.197 [2024-07-13 22:20:42.394559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.197 [2024-07-13 22:20:42.394750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.197 [2024-07-13 22:20:42.394801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.197 [2024-07-13 22:20:42.394824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.197 [2024-07-13 22:20:42.394842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.197 [2024-07-13 22:20:42.394910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.197 qpair failed and we were unable to recover it.
00:37:23.197 [2024-07-13 22:20:42.404590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.197 [2024-07-13 22:20:42.404828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.197 [2024-07-13 22:20:42.404862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.197 [2024-07-13 22:20:42.404897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.197 [2024-07-13 22:20:42.404915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.197 [2024-07-13 22:20:42.404954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.197 qpair failed and we were unable to recover it.
00:37:23.197 [2024-07-13 22:20:42.414614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.197 [2024-07-13 22:20:42.414795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.197 [2024-07-13 22:20:42.414844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.197 [2024-07-13 22:20:42.414873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.197 [2024-07-13 22:20:42.414908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.197 [2024-07-13 22:20:42.414949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.197 qpair failed and we were unable to recover it.
00:37:23.197 [2024-07-13 22:20:42.424631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.197 [2024-07-13 22:20:42.424810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.197 [2024-07-13 22:20:42.424844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.197 [2024-07-13 22:20:42.424872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.197 [2024-07-13 22:20:42.424892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.197 [2024-07-13 22:20:42.424931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.197 qpair failed and we were unable to recover it.
00:37:23.197 [2024-07-13 22:20:42.434674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.197 [2024-07-13 22:20:42.434860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.197 [2024-07-13 22:20:42.434900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.197 [2024-07-13 22:20:42.434923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.197 [2024-07-13 22:20:42.434941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.197 [2024-07-13 22:20:42.434980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.197 qpair failed and we were unable to recover it.
00:37:23.197 [2024-07-13 22:20:42.444769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.198 [2024-07-13 22:20:42.444965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.198 [2024-07-13 22:20:42.445004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.198 [2024-07-13 22:20:42.445027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.198 [2024-07-13 22:20:42.445045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.198 [2024-07-13 22:20:42.445083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.198 qpair failed and we were unable to recover it.
00:37:23.198 [2024-07-13 22:20:42.454666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.198 [2024-07-13 22:20:42.454840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.198 [2024-07-13 22:20:42.454880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.198 [2024-07-13 22:20:42.454904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.198 [2024-07-13 22:20:42.454922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.198 [2024-07-13 22:20:42.454960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.198 qpair failed and we were unable to recover it.
00:37:23.198 [2024-07-13 22:20:42.464795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.198 [2024-07-13 22:20:42.465024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.198 [2024-07-13 22:20:42.465058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.198 [2024-07-13 22:20:42.465080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.198 [2024-07-13 22:20:42.465097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.198 [2024-07-13 22:20:42.465137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.198 qpair failed and we were unable to recover it.
00:37:23.198 [2024-07-13 22:20:42.474742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.198 [2024-07-13 22:20:42.474925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.198 [2024-07-13 22:20:42.474958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.198 [2024-07-13 22:20:42.474979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.198 [2024-07-13 22:20:42.474997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.198 [2024-07-13 22:20:42.475036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.198 qpair failed and we were unable to recover it.
00:37:23.198 [2024-07-13 22:20:42.484861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.198 [2024-07-13 22:20:42.485105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.198 [2024-07-13 22:20:42.485139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.198 [2024-07-13 22:20:42.485161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.198 [2024-07-13 22:20:42.485178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.198 [2024-07-13 22:20:42.485222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.198 qpair failed and we were unable to recover it.
00:37:23.198 [2024-07-13 22:20:42.494912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.198 [2024-07-13 22:20:42.495083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.198 [2024-07-13 22:20:42.495117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.198 [2024-07-13 22:20:42.495138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.198 [2024-07-13 22:20:42.495171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.198 [2024-07-13 22:20:42.495209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.198 qpair failed and we were unable to recover it.
00:37:23.198 [2024-07-13 22:20:42.504859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.198 [2024-07-13 22:20:42.505048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.198 [2024-07-13 22:20:42.505081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.198 [2024-07-13 22:20:42.505103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.198 [2024-07-13 22:20:42.505121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.198 [2024-07-13 22:20:42.505159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.198 qpair failed and we were unable to recover it.
00:37:23.198 [2024-07-13 22:20:42.514913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.198 [2024-07-13 22:20:42.515087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.198 [2024-07-13 22:20:42.515120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.198 [2024-07-13 22:20:42.515142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.198 [2024-07-13 22:20:42.515160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.198 [2024-07-13 22:20:42.515199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.198 qpair failed and we were unable to recover it.
00:37:23.198 [2024-07-13 22:20:42.525000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.198 [2024-07-13 22:20:42.525186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.198 [2024-07-13 22:20:42.525220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.198 [2024-07-13 22:20:42.525242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.198 [2024-07-13 22:20:42.525273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.198 [2024-07-13 22:20:42.525313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.198 qpair failed and we were unable to recover it.
00:37:23.198 [2024-07-13 22:20:42.534976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.198 [2024-07-13 22:20:42.535166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.198 [2024-07-13 22:20:42.535218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.198 [2024-07-13 22:20:42.535241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.198 [2024-07-13 22:20:42.535258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.198 [2024-07-13 22:20:42.535310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.198 qpair failed and we were unable to recover it.
00:37:23.198 [2024-07-13 22:20:42.545001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.198 [2024-07-13 22:20:42.545187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.198 [2024-07-13 22:20:42.545220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.198 [2024-07-13 22:20:42.545242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.198 [2024-07-13 22:20:42.545259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.198 [2024-07-13 22:20:42.545298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.198 qpair failed and we were unable to recover it.
00:37:23.198 [2024-07-13 22:20:42.554995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.198 [2024-07-13 22:20:42.555172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.198 [2024-07-13 22:20:42.555205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.198 [2024-07-13 22:20:42.555227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.198 [2024-07-13 22:20:42.555244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.198 [2024-07-13 22:20:42.555283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.198 qpair failed and we were unable to recover it.
00:37:23.198 [2024-07-13 22:20:42.565131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.198 [2024-07-13 22:20:42.565391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.199 [2024-07-13 22:20:42.565424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.199 [2024-07-13 22:20:42.565445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.199 [2024-07-13 22:20:42.565461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.199 [2024-07-13 22:20:42.565522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.199 qpair failed and we were unable to recover it.
00:37:23.199 [2024-07-13 22:20:42.575096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.199 [2024-07-13 22:20:42.575289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.199 [2024-07-13 22:20:42.575337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.199 [2024-07-13 22:20:42.575359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.199 [2024-07-13 22:20:42.575380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.199 [2024-07-13 22:20:42.575434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.199 qpair failed and we were unable to recover it.
00:37:23.199 [2024-07-13 22:20:42.585128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.199 [2024-07-13 22:20:42.585346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.199 [2024-07-13 22:20:42.585378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.199 [2024-07-13 22:20:42.585399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.199 [2024-07-13 22:20:42.585416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.199 [2024-07-13 22:20:42.585469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.199 qpair failed and we were unable to recover it.
00:37:23.459 [2024-07-13 22:20:42.595168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.459 [2024-07-13 22:20:42.595334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.459 [2024-07-13 22:20:42.595368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.459 [2024-07-13 22:20:42.595390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.459 [2024-07-13 22:20:42.595407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.459 [2024-07-13 22:20:42.595446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.459 qpair failed and we were unable to recover it.
00:37:23.459 [2024-07-13 22:20:42.605214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.459 [2024-07-13 22:20:42.605441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.459 [2024-07-13 22:20:42.605474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.459 [2024-07-13 22:20:42.605495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.459 [2024-07-13 22:20:42.605512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.459 [2024-07-13 22:20:42.605566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.459 qpair failed and we were unable to recover it.
00:37:23.459 [2024-07-13 22:20:42.615159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.459 [2024-07-13 22:20:42.615345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.459 [2024-07-13 22:20:42.615378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.459 [2024-07-13 22:20:42.615400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.459 [2024-07-13 22:20:42.615417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.459 [2024-07-13 22:20:42.615456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.459 qpair failed and we were unable to recover it.
00:37:23.459 [2024-07-13 22:20:42.625405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.459 [2024-07-13 22:20:42.625649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.459 [2024-07-13 22:20:42.625684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.459 [2024-07-13 22:20:42.625709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.459 [2024-07-13 22:20:42.625726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.459 [2024-07-13 22:20:42.625780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.459 qpair failed and we were unable to recover it.
00:37:23.459 [2024-07-13 22:20:42.635212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.459 [2024-07-13 22:20:42.635390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.459 [2024-07-13 22:20:42.635423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.459 [2024-07-13 22:20:42.635445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.459 [2024-07-13 22:20:42.635463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.459 [2024-07-13 22:20:42.635501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.459 qpair failed and we were unable to recover it.
00:37:23.459 [2024-07-13 22:20:42.645297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.459 [2024-07-13 22:20:42.645507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.459 [2024-07-13 22:20:42.645540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.459 [2024-07-13 22:20:42.645561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.459 [2024-07-13 22:20:42.645579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.459 [2024-07-13 22:20:42.645632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.459 qpair failed and we were unable to recover it.
00:37:23.459 [2024-07-13 22:20:42.655308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.459 [2024-07-13 22:20:42.655545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.459 [2024-07-13 22:20:42.655578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.459 [2024-07-13 22:20:42.655604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.459 [2024-07-13 22:20:42.655622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.459 [2024-07-13 22:20:42.655675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.459 qpair failed and we were unable to recover it.
00:37:23.459 [2024-07-13 22:20:42.665294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.459 [2024-07-13 22:20:42.665476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.459 [2024-07-13 22:20:42.665510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.459 [2024-07-13 22:20:42.665532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.459 [2024-07-13 22:20:42.665554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.460 [2024-07-13 22:20:42.665595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.460 qpair failed and we were unable to recover it.
00:37:23.460 [2024-07-13 22:20:42.675357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.460 [2024-07-13 22:20:42.675533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.460 [2024-07-13 22:20:42.675565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.460 [2024-07-13 22:20:42.675587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.460 [2024-07-13 22:20:42.675604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.460 [2024-07-13 22:20:42.675643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.460 qpair failed and we were unable to recover it.
00:37:23.460 [2024-07-13 22:20:42.685427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.460 [2024-07-13 22:20:42.685660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.460 [2024-07-13 22:20:42.685693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.460 [2024-07-13 22:20:42.685715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.460 [2024-07-13 22:20:42.685732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.460 [2024-07-13 22:20:42.685771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.460 qpair failed and we were unable to recover it.
00:37:23.460 [2024-07-13 22:20:42.695416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.460 [2024-07-13 22:20:42.695592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.460 [2024-07-13 22:20:42.695641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.460 [2024-07-13 22:20:42.695663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.460 [2024-07-13 22:20:42.695680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.460 [2024-07-13 22:20:42.695734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.460 qpair failed and we were unable to recover it.
00:37:23.460 [2024-07-13 22:20:42.705440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.460 [2024-07-13 22:20:42.705624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.460 [2024-07-13 22:20:42.705657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.460 [2024-07-13 22:20:42.705679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.460 [2024-07-13 22:20:42.705697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.460 [2024-07-13 22:20:42.705735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.460 qpair failed and we were unable to recover it.
00:37:23.460 [2024-07-13 22:20:42.715518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.460 [2024-07-13 22:20:42.715735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.460 [2024-07-13 22:20:42.715768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.460 [2024-07-13 22:20:42.715789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.460 [2024-07-13 22:20:42.715806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.460 [2024-07-13 22:20:42.715845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.460 qpair failed and we were unable to recover it.
00:37:23.460 [2024-07-13 22:20:42.725505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.460 [2024-07-13 22:20:42.725707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.460 [2024-07-13 22:20:42.725739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.460 [2024-07-13 22:20:42.725760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.460 [2024-07-13 22:20:42.725777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.460 [2024-07-13 22:20:42.725816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.460 qpair failed and we were unable to recover it.
00:37:23.460 [2024-07-13 22:20:42.735506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.460 [2024-07-13 22:20:42.735684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.460 [2024-07-13 22:20:42.735717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.460 [2024-07-13 22:20:42.735740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.460 [2024-07-13 22:20:42.735757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.460 [2024-07-13 22:20:42.735796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.460 qpair failed and we were unable to recover it.
00:37:23.460 [2024-07-13 22:20:42.745531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.460 [2024-07-13 22:20:42.745710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.460 [2024-07-13 22:20:42.745743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.460 [2024-07-13 22:20:42.745765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.460 [2024-07-13 22:20:42.745782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.460 [2024-07-13 22:20:42.745821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.460 qpair failed and we were unable to recover it.
00:37:23.460 [2024-07-13 22:20:42.755590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.460 [2024-07-13 22:20:42.755805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.460 [2024-07-13 22:20:42.755837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.460 [2024-07-13 22:20:42.755891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.460 [2024-07-13 22:20:42.755912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.460 [2024-07-13 22:20:42.755953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.460 qpair failed and we were unable to recover it.
00:37:23.460 [2024-07-13 22:20:42.765653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.460 [2024-07-13 22:20:42.765828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.460 [2024-07-13 22:20:42.765876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.460 [2024-07-13 22:20:42.765901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.460 [2024-07-13 22:20:42.765919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.460 [2024-07-13 22:20:42.765957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.460 qpair failed and we were unable to recover it.
00:37:23.460 [2024-07-13 22:20:42.775591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.460 [2024-07-13 22:20:42.775759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.460 [2024-07-13 22:20:42.775792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.460 [2024-07-13 22:20:42.775814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.460 [2024-07-13 22:20:42.775831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.460 [2024-07-13 22:20:42.775879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.460 qpair failed and we were unable to recover it.
00:37:23.460 [2024-07-13 22:20:42.785732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:37:23.460 [2024-07-13 22:20:42.785942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:37:23.460 [2024-07-13 22:20:42.785990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:37:23.460 [2024-07-13 22:20:42.786015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:37:23.460 [2024-07-13 22:20:42.786032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780
00:37:23.460 [2024-07-13 22:20:42.786072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:37:23.460 qpair failed and we were unable to recover it.
00:37:23.460 [2024-07-13 22:20:42.795701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.460 [2024-07-13 22:20:42.795901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.460 [2024-07-13 22:20:42.795935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.460 [2024-07-13 22:20:42.795958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.460 [2024-07-13 22:20:42.795975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:23.460 [2024-07-13 22:20:42.796015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:23.460 qpair failed and we were unable to recover it. 00:37:23.460 [2024-07-13 22:20:42.805767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.460 [2024-07-13 22:20:42.805951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.460 [2024-07-13 22:20:42.805985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.460 [2024-07-13 22:20:42.806007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.460 [2024-07-13 22:20:42.806025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:23.460 [2024-07-13 22:20:42.806063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:23.460 qpair failed and we were unable to recover it. 00:37:23.460 [2024-07-13 22:20:42.815785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.461 [2024-07-13 22:20:42.815992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.461 [2024-07-13 22:20:42.816025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.461 [2024-07-13 22:20:42.816048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.461 [2024-07-13 22:20:42.816065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:23.461 [2024-07-13 22:20:42.816103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:23.461 qpair failed and we were unable to recover it. 
00:37:23.461 [2024-07-13 22:20:42.825767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.461 [2024-07-13 22:20:42.825963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.461 [2024-07-13 22:20:42.825996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.461 [2024-07-13 22:20:42.826018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.461 [2024-07-13 22:20:42.826036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:23.461 [2024-07-13 22:20:42.826075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:23.461 qpair failed and we were unable to recover it. 00:37:23.461 [2024-07-13 22:20:42.835802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.461 [2024-07-13 22:20:42.836009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.461 [2024-07-13 22:20:42.836042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.461 [2024-07-13 22:20:42.836065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.461 [2024-07-13 22:20:42.836083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:23.461 [2024-07-13 22:20:42.836121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:23.461 qpair failed and we were unable to recover it. 00:37:23.461 [2024-07-13 22:20:42.845862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.461 [2024-07-13 22:20:42.846059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.461 [2024-07-13 22:20:42.846098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.461 [2024-07-13 22:20:42.846123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.461 [2024-07-13 22:20:42.846142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001f2780 00:37:23.461 [2024-07-13 22:20:42.846183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:37:23.461 qpair failed and we were unable to recover it. 
00:37:23.720 [2024-07-13 22:20:42.855877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.720 [2024-07-13 22:20:42.856052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.720 [2024-07-13 22:20:42.856093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.720 [2024-07-13 22:20:42.856118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.720 [2024-07-13 22:20:42.856138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:37:23.720 [2024-07-13 22:20:42.856180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.720 qpair failed and we were unable to recover it. 00:37:23.720 [2024-07-13 22:20:42.865906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.720 [2024-07-13 22:20:42.866108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.720 [2024-07-13 22:20:42.866144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.720 [2024-07-13 22:20:42.866169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.720 [2024-07-13 22:20:42.866188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x6150001ffe80 00:37:23.720 [2024-07-13 22:20:42.866237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:37:23.720 qpair failed and we were unable to recover it. 00:37:23.720 [2024-07-13 22:20:42.875945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.720 [2024-07-13 22:20:42.876121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.720 [2024-07-13 22:20:42.876163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.720 [2024-07-13 22:20:42.876189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.720 [2024-07-13 22:20:42.876208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500021ff00 00:37:23.720 [2024-07-13 22:20:42.876249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:23.720 qpair failed and we were unable to recover it. 
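Every record in this run of failures has the same shape: the target rejects the I/O-qpair CONNECT because it no longer tracks controller ID 0x1, the host sees the command complete with sct 1, sc 130 (0x82, which corresponds to the NVMe-oF Fabrics CONNECT "invalid parameters" status), and the qpair is torn down with transport error -6 (ENXIO). To watch the target side while this happens, something along these lines should work (a sketch, not from this run; rpc.py ships in the SPDK tree, the NQN is taken from the log, paths are assumed):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path from this job; adjust as needed
sudo scripts/rpc.py nvmf_subsystem_get_controllers nqn.2016-06.io.spdk:cnode1
sudo scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode1

An empty controller list while the host keeps presenting CNTLID 0x1 would be consistent with the "Unknown controller ID" rejections above.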
00:37:23.720 [2024-07-13 22:20:42.885985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:37:23.720 [2024-07-13 22:20:42.886172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:37:23.720 [2024-07-13 22:20:42.886207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:37:23.720 [2024-07-13 22:20:42.886229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:37:23.720 [2024-07-13 22:20:42.886248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x61500021ff00 00:37:23.720 [2024-07-13 22:20:42.886294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:37:23.720 qpair failed and we were unable to recover it. 00:37:23.720 [2024-07-13 22:20:42.886666] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:37:23.720 A controller has encountered a failure and is being reset. 00:37:23.720 [2024-07-13 22:20:42.886796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f2280 (9): Bad file descriptor 00:37:23.720 Controller properly reset. 00:37:23.720 Initializing NVMe Controllers 00:37:23.720 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:23.720 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:23.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:37:23.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:37:23.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:37:23.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:37:23.720 Initialization complete. Launching workers. 
00:37:23.720 Starting thread on core 1
00:37:23.720 Starting thread on core 2
00:37:23.720 Starting thread on core 3
00:37:23.720 Starting thread on core 0
00:37:23.720 22:20:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:37:23.720
00:37:23.720 real 0m11.588s
00:37:23.720 user 0m20.572s
00:37:23.720 sys 0m5.423s
00:37:23.720 22:20:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:37:23.720 22:20:42 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:37:23.720 ************************************
00:37:23.720 END TEST nvmf_target_disconnect_tc2
00:37:23.720 ************************************
00:37:23.720 22:20:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0
00:37:23.720 22:20:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:37:23.720 22:20:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:37:23.720 22:20:42 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:37:23.720 22:20:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:37:23.720 22:20:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync
00:37:23.720 22:20:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:37:23.720 22:20:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e
00:37:23.720 22:20:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20}
00:37:23.720 22:20:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:37:23.720 rmmod nvme_tcp
00:37:23.720 rmmod nvme_fabrics
00:37:23.720 rmmod nvme_keyring
00:37:23.720 22:20:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:37:23.720 22:20:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e
00:37:23.720 22:20:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0
00:37:23.720 22:20:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 54031 ']'
00:37:23.720 22:20:43 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 54031
00:37:23.720 22:20:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 54031 ']'
00:37:23.720 22:20:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 54031
00:37:23.720 22:20:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname
00:37:23.720 22:20:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:37:23.720 22:20:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 54031
00:37:23.720 22:20:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4
00:37:23.720 22:20:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']'
00:37:23.720 22:20:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 54031'
00:37:23.720 killing process with pid 54031
00:37:23.720 22:20:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 54031
00:37:23.720 22:20:43 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 54031
00:37:25.101 22:20:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']'
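The nvmftestfini trace above unwinds what the test set up: it kills the nvmf target by pid and unloads the kernel NVMe-oF modules, and the namespace cleanup follows below. A hand-run equivalent might look like this (a sketch; the pid and namespace name are taken from the log, and the netns deletion is an assumption about what _remove_spdk_ns does):

sudo kill 54031                        # the nvmf_tgt pid this run used
sudo modprobe -r nvme-tcp nvme-fabrics
sudo ip netns delete cvl_0_0_ns_spdk   # presumably what _remove_spdk_ns cleans up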
00:37:25.101 22:20:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:37:25.101 22:20:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:37:25.101 22:20:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:37:25.101 22:20:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns
00:37:25.101 22:20:44 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:25.101 22:20:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:37:25.101 22:20:44 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:27.641 22:20:46 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:37:27.641
00:37:27.641 real 0m17.480s
00:37:27.641 user 0m48.550s
00:37:27.641 sys 0m7.716s
00:37:27.641 22:20:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable
00:37:27.641 22:20:46 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:37:27.641 ************************************
00:37:27.641 END TEST nvmf_target_disconnect
00:37:27.641 ************************************
00:37:27.641 22:20:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:37:27.641 22:20:46 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host
00:37:27.641 22:20:46 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:37:27.641 22:20:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:37:27.641 22:20:46 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT
00:37:27.641
00:37:27.641 real 28m58.556s
00:37:27.641 user 77m48.026s
00:37:27.641 sys 6m3.868s
00:37:27.641 22:20:46 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:37:27.641 22:20:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:37:27.641 ************************************
00:37:27.641 END TEST nvmf_tcp
00:37:27.641 ************************************
00:37:27.641 22:20:46 -- common/autotest_common.sh@1142 -- # return 0
00:37:27.641 22:20:46 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]]
00:37:27.641 22:20:46 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:37:27.641 22:20:46 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:37:27.641 22:20:46 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:37:27.641 22:20:46 -- common/autotest_common.sh@10 -- # set +x
00:37:27.641 ************************************
00:37:27.641 START TEST spdkcli_nvmf_tcp
00:37:27.641 ************************************
00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:37:27.641 * Looking for test storage...
00:37:27.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:27.641 22:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:37:27.642 22:20:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=55354 00:37:27.642 22:20:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:37:27.642 22:20:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 55354 00:37:27.642 22:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 55354 ']' 00:37:27.642 22:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:27.642 22:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:27.642 22:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:37:27.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:27.642 22:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:27.642 22:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:27.642 [2024-07-13 22:20:46.706013] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:27.642 [2024-07-13 22:20:46.706166] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55354 ] 00:37:27.642 EAL: No free 2048 kB hugepages reported on node 1 00:37:27.642 [2024-07-13 22:20:46.832593] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:27.900 [2024-07-13 22:20:47.088960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:27.900 [2024-07-13 22:20:47.088966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:28.466 22:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:28.466 22:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:37:28.466 22:20:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:37:28.466 22:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:28.466 22:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:28.466 22:20:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:37:28.466 22:20:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:37:28.466 22:20:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:37:28.466 22:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:28.466 22:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:28.466 22:20:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:37:28.466 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:37:28.466 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:37:28.466 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:37:28.466 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:37:28.466 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:37:28.466 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:37:28.466 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:28.466 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:37:28.466 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:37:28.466 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:28.466 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:28.466 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:37:28.466 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:28.466 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:28.466 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:37:28.466 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:37:28.466 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:28.466 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:37:28.466 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:28.466 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:37:28.466 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:37:28.466 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:37:28.466 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:37:28.466 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:37:28.466 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:37:28.466 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:37:28.466 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:37:28.466 ' 00:37:30.994 [2024-07-13 22:20:50.355331] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:32.370 [2024-07-13 22:20:51.580918] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:37:34.900 [2024-07-13 22:20:53.840314] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:37:36.803 [2024-07-13 22:20:55.814860] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:37:38.180 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:37:38.180 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:37:38.180 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:37:38.180 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:37:38.180 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:37:38.180 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:37:38.180 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:37:38.180 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:38.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:37:38.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:37:38.180 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:38.180 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:38.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:37:38.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:38.180 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:38.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:37:38.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:37:38.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:38.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:37:38.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:38.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:37:38.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:37:38.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:37:38.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:37:38.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:37:38.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:37:38.180 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:37:38.180 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:37:38.180 22:20:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:37:38.180 22:20:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:38.180 22:20:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:38.180 22:20:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:37:38.180 22:20:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:38.180 22:20:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:38.180 22:20:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:37:38.180 22:20:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:37:38.750 22:20:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:37:38.750 22:20:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:37:38.750 22:20:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:37:38.750 22:20:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:38.750 22:20:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:38.750 22:20:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:37:38.750 22:20:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:38.750 22:20:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:38.750 22:20:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:37:38.750 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:37:38.750 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:38.750 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:37:38.750 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:37:38.750 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:37:38.750 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:37:38.750 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:37:38.750 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:37:38.750 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:37:38.750 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:37:38.750 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:37:38.750 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:37:38.750 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:37:38.750 ' 00:37:45.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:37:45.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:37:45.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:45.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:37:45.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:37:45.383 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:37:45.383 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:37:45.383 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:45.383 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:37:45.383 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:37:45.383 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:37:45.383 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:37:45.383 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:37:45.383 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:37:45.383 22:21:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:37:45.383 22:21:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:45.383 22:21:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:45.384 22:21:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 55354 00:37:45.384 22:21:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 55354 ']' 00:37:45.384 22:21:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 55354 00:37:45.384 22:21:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:37:45.384 22:21:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:45.384 22:21:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 55354 00:37:45.384 22:21:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:45.384 22:21:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:45.384 22:21:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55354' 00:37:45.384 killing process with pid 55354 00:37:45.384 22:21:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 55354 00:37:45.384 22:21:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 55354 00:37:45.642 22:21:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:37:45.642 22:21:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:37:45.642 22:21:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 55354 ']' 00:37:45.642 22:21:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 55354 00:37:45.642 22:21:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 55354 ']' 00:37:45.642 22:21:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 55354 00:37:45.642 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (55354) - No such process 00:37:45.642 22:21:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 55354 is not found' 00:37:45.642 Process with pid 55354 is not found 00:37:45.642 22:21:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:37:45.642 22:21:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:37:45.642 22:21:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:37:45.642 00:37:45.642 real 0m18.437s 00:37:45.642 user 0m37.949s 00:37:45.642 sys 0m1.031s 00:37:45.642 22:21:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:45.642 22:21:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:45.642 ************************************ 00:37:45.642 END TEST spdkcli_nvmf_tcp 00:37:45.642 ************************************ 00:37:45.642 22:21:05 -- common/autotest_common.sh@1142 -- # return 0 00:37:45.642 22:21:05 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh 
--transport=tcp 00:37:45.642 22:21:05 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:45.642 22:21:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:45.642 22:21:05 -- common/autotest_common.sh@10 -- # set +x 00:37:45.642 ************************************ 00:37:45.642 START TEST nvmf_identify_passthru 00:37:45.642 ************************************ 00:37:45.642 22:21:05 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:37:45.900 * Looking for test storage... 00:37:45.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:45.900 22:21:05 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:45.900 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:37:45.900 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:45.900 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:45.900 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:45.900 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:45.900 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:45.900 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:45.900 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:45.900 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:45.900 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:45.900 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:45.900 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:45.900 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:45.900 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:45.900 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:45.900 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:45.900 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:45.900 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:45.900 22:21:05 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:45.900 22:21:05 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:45.900 22:21:05 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:45.900 22:21:05 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:45.900 22:21:05 
nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:45.900 22:21:05 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:45.900 22:21:05 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:45.900 22:21:05 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:45.900 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:37:45.900 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:45.900 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:45.901 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:45.901 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:45.901 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:45.901 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:45.901 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:45.901 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:45.901 22:21:05 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:45.901 22:21:05 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:45.901 22:21:05 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:45.901 22:21:05 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:45.901 22:21:05 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:45.901 22:21:05 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:45.901 22:21:05 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:45.901 22:21:05 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:37:45.901 22:21:05 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:45.901 22:21:05 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:37:45.901 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:45.901 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:45.901 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:45.901 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:45.901 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:45.901 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:45.901 22:21:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:45.901 22:21:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:45.901 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:37:45.901 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:37:45.901 22:21:05 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:37:45.901 22:21:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
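The helper traced here builds lists of supported NIC PCI IDs before scanning the bus; the E810 ports it reports below identify as vendor 0x8086, device 0x159b. A quick manual cross-check could be (hypothetical, requires pciutils):

lspci -D -d 8086:159b   # should list 0000:0a:00.0 and 0000:0a:00.1 on this node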
00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:47.799 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:47.800 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:47.800 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:47.800 22:21:06 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:47.800 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:47.800 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:37:47.800 22:21:06 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:37:47.800 22:21:06 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:47.800 22:21:07 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:47.800 22:21:07 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:47.800 22:21:07 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:37:47.800 22:21:07 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:47.800 22:21:07 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:47.800 22:21:07 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:47.800 22:21:07 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:37:47.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:47.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:37:47.800 00:37:47.800 --- 10.0.0.2 ping statistics --- 00:37:47.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:47.800 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:37:47.800 22:21:07 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:47.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:47.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:37:47.800 00:37:47.800 --- 10.0.0.1 ping statistics --- 00:37:47.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:47.800 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:37:47.800 22:21:07 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:47.800 22:21:07 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:37:47.800 22:21:07 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:37:47.800 22:21:07 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:47.800 22:21:07 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:47.800 22:21:07 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:47.800 22:21:07 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:47.800 22:21:07 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:47.800 22:21:07 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:47.800 22:21:07 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:37:47.800 22:21:07 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:47.800 22:21:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:47.800 22:21:07 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:37:47.800 22:21:07 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:37:47.800 22:21:07 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:37:47.800 22:21:07 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:37:47.800 22:21:07 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:37:47.800 22:21:07 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:37:47.800 22:21:07 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:37:47.800 22:21:07 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:47.800 22:21:07 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:47.800 22:21:07 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:37:48.059 22:21:07 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:37:48.059 22:21:07 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:37:48.059 22:21:07 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:37:48.059 22:21:07 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:37:48.059 22:21:07 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:37:48.059 22:21:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:37:48.059 22:21:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:37:48.059 22:21:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:37:48.059 EAL: No free 2048 kB hugepages reported on node 1 00:37:52.250 
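nvmf_tcp_init, traced above, turns the two ports into a self-contained NVMe/TCP fabric: cvl_0_0 becomes the target side inside the cvl_0_0_ns_spdk namespace (10.0.0.2), cvl_0_1 stays in the default namespace as the initiator (10.0.0.1), and a firewall rule plus one ping in each direction prove the path before any NVMe traffic flows. Reproduced by hand, assuming the same interface names:

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean addressing
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side (default ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

From here on, every target-side command is wrapped in 'ip netns exec cvl_0_0_ns_spdk' (NVMF_TARGET_NS_CMD).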
22:21:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:37:52.250 22:21:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:37:52.250 22:21:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:37:52.250 22:21:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:37:52.509 EAL: No free 2048 kB hugepages reported on node 1 00:37:56.706 22:21:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:37:56.706 22:21:15 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:37:56.706 22:21:15 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:56.706 22:21:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:56.706 22:21:15 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:37:56.706 22:21:15 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:56.706 22:21:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:56.706 22:21:15 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=60841 00:37:56.706 22:21:15 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:37:56.706 22:21:15 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:56.706 22:21:15 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 60841 00:37:56.706 22:21:15 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 60841 ']' 00:37:56.706 22:21:15 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:56.706 22:21:15 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:56.706 22:21:15 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:56.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:56.706 22:21:15 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:56.706 22:21:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:56.706 [2024-07-13 22:21:16.013486] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:56.706 [2024-07-13 22:21:16.013621] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:56.706 EAL: No free 2048 kB hugepages reported on node 1 00:37:56.964 [2024-07-13 22:21:16.147260] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:57.222 [2024-07-13 22:21:16.402047] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:57.222 [2024-07-13 22:21:16.402115] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
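get_first_nvme_bdf, expanded in the trace above, asks scripts/gen_nvme.sh for the locally attachable NVMe controllers as JSON and pulls their PCI addresses out with jq, yielding 0000:88:00.0; spdk_nvme_identify against that BDF then supplies the serial (PHLJ916004901P0FGN) and model (INTEL) that the fabric-side identify must reproduce later. The same steps as a standalone snippet (paths shortened for readability):

bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)     # -> 0000:88:00.0
identify() { build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0; }
nvme_serial_number=$(identify | grep 'Serial Number:' | awk '{print $3}')   # PHLJ916004901P0FGN
nvme_model_number=$(identify | grep 'Model Number:' | awk '{print $3}')     # INTEL

The recurring 'EAL: No free 2048 kB hugepages reported on node 1' notice is printed by every DPDK-based tool in this log and stops none of them; it states only that NUMA node 1 carries no 2048 kB hugepage pool. The target itself is then started inside the namespace with --wait-for-rpc, so it parks after EAL initialization (pid 60841) until configured over the RPC socket.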
00:37:57.222 [2024-07-13 22:21:16.402143] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:57.222 [2024-07-13 22:21:16.402164] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:57.222 [2024-07-13 22:21:16.402185] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:57.222 [2024-07-13 22:21:16.402308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:57.222 [2024-07-13 22:21:16.402379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:57.222 [2024-07-13 22:21:16.402473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:57.222 [2024-07-13 22:21:16.402482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:37:57.789 22:21:16 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:57.789 22:21:16 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:37:57.789 22:21:16 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:37:57.789 22:21:16 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:57.789 22:21:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:57.789 INFO: Log level set to 20 00:37:57.789 INFO: Requests: 00:37:57.789 { 00:37:57.789 "jsonrpc": "2.0", 00:37:57.789 "method": "nvmf_set_config", 00:37:57.789 "id": 1, 00:37:57.789 "params": { 00:37:57.789 "admin_cmd_passthru": { 00:37:57.789 "identify_ctrlr": true 00:37:57.789 } 00:37:57.789 } 00:37:57.789 } 00:37:57.789 00:37:57.789 INFO: response: 00:37:57.789 { 00:37:57.789 "jsonrpc": "2.0", 00:37:57.789 "id": 1, 00:37:57.789 "result": true 00:37:57.789 } 00:37:57.789 00:37:57.789 22:21:16 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:57.789 22:21:16 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:37:57.789 22:21:16 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:57.789 22:21:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:57.789 INFO: Setting log level to 20 00:37:57.789 INFO: Setting log level to 20 00:37:57.789 INFO: Log level set to 20 00:37:57.789 INFO: Log level set to 20 00:37:57.789 INFO: Requests: 00:37:57.789 { 00:37:57.790 "jsonrpc": "2.0", 00:37:57.790 "method": "framework_start_init", 00:37:57.790 "id": 1 00:37:57.790 } 00:37:57.790 00:37:57.790 INFO: Requests: 00:37:57.790 { 00:37:57.790 "jsonrpc": "2.0", 00:37:57.790 "method": "framework_start_init", 00:37:57.790 "id": 1 00:37:57.790 } 00:37:57.790 00:37:58.050 [2024-07-13 22:21:17.309753] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:37:58.050 INFO: response: 00:37:58.050 { 00:37:58.050 "jsonrpc": "2.0", 00:37:58.050 "id": 1, 00:37:58.050 "result": true 00:37:58.050 } 00:37:58.050 00:37:58.050 INFO: response: 00:37:58.050 { 00:37:58.050 "jsonrpc": "2.0", 00:37:58.050 "id": 1, 00:37:58.050 "result": true 00:37:58.050 } 00:37:58.050 00:37:58.050 22:21:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:58.050 22:21:17 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:58.050 22:21:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:58.050 22:21:17 nvmf_identify_passthru -- 
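Because nvmf_tgt was launched with --wait-for-rpc, nothing initializes until the test pushes configuration over /var/tmp/spdk.sock: nvmf_set_config arms the custom identify handler (it must land before framework init, which is why that JSON-RPC request precedes framework_start_init and is acknowledged with 'Custom identify ctrlr handler enabled'), then the framework starts and the TCP transport is created. rpc_cmd is, modulo the socket path, a wrapper over scripts/rpc.py, so the equivalent direct sequence is (flags copied verbatim from the trace):

scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # forward admin identify to the backing controller
scripts/rpc.py framework_start_init                        # finish the deferred startup
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport; -u sets the I/O unit size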
common/autotest_common.sh@10 -- # set +x 00:37:58.050 INFO: Setting log level to 40 00:37:58.050 INFO: Setting log level to 40 00:37:58.050 INFO: Setting log level to 40 00:37:58.050 [2024-07-13 22:21:17.322670] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:58.050 22:21:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:58.050 22:21:17 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:37:58.050 22:21:17 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:58.050 22:21:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:58.050 22:21:17 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:37:58.050 22:21:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:58.050 22:21:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:01.373 Nvme0n1 00:38:01.373 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:01.373 22:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:38:01.373 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:01.373 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:01.373 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:01.373 22:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:38:01.373 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:01.373 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:01.373 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:01.373 22:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:01.373 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:01.373 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:01.373 [2024-07-13 22:21:20.280156] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:01.373 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:01.373 22:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:38:01.373 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:01.373 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:01.373 [ 00:38:01.373 { 00:38:01.373 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:38:01.373 "subtype": "Discovery", 00:38:01.373 "listen_addresses": [], 00:38:01.373 "allow_any_host": true, 00:38:01.373 "hosts": [] 00:38:01.373 }, 00:38:01.373 { 00:38:01.373 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:38:01.373 "subtype": "NVMe", 00:38:01.373 "listen_addresses": [ 00:38:01.373 { 00:38:01.373 "trtype": "TCP", 00:38:01.373 "adrfam": "IPv4", 00:38:01.373 "traddr": "10.0.0.2", 00:38:01.373 "trsvcid": "4420" 00:38:01.373 } 00:38:01.373 ], 00:38:01.373 "allow_any_host": true, 00:38:01.373 "hosts": [], 00:38:01.373 "serial_number": 
"SPDK00000000000001", 00:38:01.373 "model_number": "SPDK bdev Controller", 00:38:01.373 "max_namespaces": 1, 00:38:01.373 "min_cntlid": 1, 00:38:01.373 "max_cntlid": 65519, 00:38:01.373 "namespaces": [ 00:38:01.373 { 00:38:01.373 "nsid": 1, 00:38:01.373 "bdev_name": "Nvme0n1", 00:38:01.373 "name": "Nvme0n1", 00:38:01.373 "nguid": "F38FB2E81B814681A08242E85D0A932C", 00:38:01.373 "uuid": "f38fb2e8-1b81-4681-a082-42e85d0a932c" 00:38:01.373 } 00:38:01.373 ] 00:38:01.373 } 00:38:01.373 ] 00:38:01.373 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:01.373 22:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:01.373 22:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:38:01.373 22:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:38:01.373 EAL: No free 2048 kB hugepages reported on node 1 00:38:01.373 22:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:38:01.373 22:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:38:01.373 22:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:38:01.373 22:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:38:01.373 EAL: No free 2048 kB hugepages reported on node 1 00:38:01.373 22:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:38:01.373 22:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:38:01.373 22:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:38:01.373 22:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:01.373 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:01.373 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:01.632 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:01.632 22:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:38:01.632 22:21:20 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:38:01.632 22:21:20 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:38:01.632 22:21:20 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:38:01.632 22:21:20 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:38:01.632 22:21:20 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:38:01.632 22:21:20 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:38:01.632 22:21:20 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:38:01.632 rmmod nvme_tcp 00:38:01.632 rmmod nvme_fabrics 00:38:01.632 rmmod nvme_keyring 00:38:01.632 22:21:20 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:38:01.632 22:21:20 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:38:01.632 22:21:20 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:38:01.632 22:21:20 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 60841 ']' 00:38:01.632 22:21:20 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 60841 00:38:01.632 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 60841 ']' 00:38:01.632 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 60841 00:38:01.632 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:38:01.632 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:01.632 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60841 00:38:01.632 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:01.632 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:01.632 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60841' 00:38:01.632 killing process with pid 60841 00:38:01.632 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 60841 00:38:01.632 22:21:20 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 60841 00:38:04.164 22:21:23 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:38:04.164 22:21:23 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:38:04.164 22:21:23 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:38:04.164 22:21:23 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:38:04.164 22:21:23 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:38:04.164 22:21:23 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:04.164 22:21:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:04.164 22:21:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:06.697 22:21:25 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:38:06.697 00:38:06.697 real 0m20.477s 00:38:06.697 user 0m33.450s 00:38:06.697 sys 0m2.684s 00:38:06.697 22:21:25 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:06.697 22:21:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:38:06.697 ************************************ 00:38:06.697 END TEST nvmf_identify_passthru 00:38:06.697 ************************************ 00:38:06.697 22:21:25 -- common/autotest_common.sh@1142 -- # return 0 00:38:06.697 22:21:25 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:06.697 22:21:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:06.697 22:21:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:06.697 22:21:25 -- common/autotest_common.sh@10 -- # set +x 00:38:06.697 ************************************ 00:38:06.697 START TEST nvmf_dif 00:38:06.697 ************************************ 00:38:06.697 22:21:25 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:38:06.697 * Looking for test storage... 
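nvmftestfini, traced above, tears the fixture down in dependency order: unload the initiator-side kernel modules, kill the target process, then drop the namespace and flush the leftover initiator address. Roughly (the body of _remove_spdk_ns is not shown in the trace; 'ip netns delete' is the assumed equivalent):

modprobe -v -r nvme-tcp              # also drops dependent nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # killprocess 60841
ip netns delete cvl_0_0_ns_spdk      # assumption: what _remove_spdk_ns amounts to
ip -4 addr flush cvl_0_1

With the identify_passthru suite finished (about 20 s wall time), autotest moves on to nvmf_dif, which re-sources nvmf/common.sh and rebuilds the identical namespace topology before its own tests.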
00:38:06.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:06.697 22:21:25 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:06.697 22:21:25 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:38:06.697 22:21:25 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:06.697 22:21:25 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:06.697 22:21:25 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:06.697 22:21:25 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:06.697 22:21:25 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:06.697 22:21:25 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:06.697 22:21:25 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:06.697 22:21:25 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:06.697 22:21:25 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:06.697 22:21:25 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:06.697 22:21:25 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:06.697 22:21:25 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:06.697 22:21:25 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:06.697 22:21:25 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:06.697 22:21:25 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:06.697 22:21:25 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:06.697 22:21:25 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:06.698 22:21:25 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:06.698 22:21:25 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:06.698 22:21:25 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:06.698 22:21:25 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.698 22:21:25 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.698 22:21:25 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.698 22:21:25 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:38:06.698 22:21:25 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.698 22:21:25 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:38:06.698 22:21:25 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:38:06.698 22:21:25 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:38:06.698 22:21:25 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:06.698 22:21:25 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:06.698 22:21:25 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:06.698 22:21:25 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:38:06.698 22:21:25 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:38:06.698 22:21:25 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:38:06.698 22:21:25 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:38:06.698 22:21:25 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:38:06.698 22:21:25 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:38:06.698 22:21:25 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:38:06.698 22:21:25 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:38:06.698 22:21:25 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:06.698 22:21:25 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:06.698 22:21:25 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:06.698 22:21:25 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:06.698 22:21:25 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:06.698 22:21:25 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:06.698 22:21:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:06.698 22:21:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:06.698 22:21:25 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:38:06.698 22:21:25 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:38:06.698 22:21:25 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:38:06.698 22:21:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:08.602 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:08.602 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:08.602 22:21:27 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:08.603 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:08.603 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:08.603 22:21:27 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:38:08.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:08.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:38:08.603 00:38:08.603 --- 10.0.0.2 ping statistics --- 00:38:08.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:08.603 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:08.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:08.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:38:08.603 00:38:08.603 --- 10.0.0.1 ping statistics --- 00:38:08.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:08.603 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:38:08.603 22:21:27 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:09.539 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:38:09.539 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:38:09.539 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:38:09.539 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:38:09.539 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:38:09.539 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:38:09.539 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:38:09.539 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:38:09.539 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:38:09.539 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:38:09.539 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:38:09.539 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:38:09.539 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:38:09.539 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:38:09.539 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:38:09.539 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:38:09.539 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:38:09.539 22:21:28 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:09.539 22:21:28 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:09.539 22:21:28 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:09.539 22:21:28 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:09.539 22:21:28 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:09.539 22:21:28 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:09.539 22:21:28 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:38:09.539 22:21:28 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:38:09.539 22:21:28 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:09.539 22:21:28 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:38:09.539 22:21:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:09.539 22:21:28 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=64271 00:38:09.539 22:21:28 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:38:09.539 22:21:28 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 64271 00:38:09.539 22:21:28 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 64271 ']' 00:38:09.539 22:21:28 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:09.539 22:21:28 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:09.539 22:21:28 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:09.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:09.539 22:21:28 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:09.539 22:21:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:09.799 [2024-07-13 22:21:28.978619] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:38:09.799 [2024-07-13 22:21:28.978745] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:09.799 EAL: No free 2048 kB hugepages reported on node 1 00:38:09.799 [2024-07-13 22:21:29.113950] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:10.057 [2024-07-13 22:21:29.370610] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:10.057 [2024-07-13 22:21:29.370696] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:10.057 [2024-07-13 22:21:29.370725] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:10.057 [2024-07-13 22:21:29.370750] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:10.057 [2024-07-13 22:21:29.370772] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:10.057 [2024-07-13 22:21:29.370827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:10.624 22:21:29 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:10.624 22:21:29 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:38:10.624 22:21:29 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:10.624 22:21:29 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:38:10.624 22:21:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:10.624 22:21:29 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:10.624 22:21:29 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:38:10.624 22:21:29 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:38:10.624 22:21:29 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:10.624 22:21:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:10.624 [2024-07-13 22:21:29.957615] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:10.624 22:21:29 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:10.624 22:21:29 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:38:10.624 22:21:29 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:10.624 22:21:29 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:10.624 22:21:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:10.624 ************************************ 00:38:10.624 START TEST fio_dif_1_default 00:38:10.624 ************************************ 00:38:10.624 22:21:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:38:10.624 22:21:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:38:10.624 22:21:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:38:10.624 22:21:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:38:10.624 22:21:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:38:10.624 22:21:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:38:10.624 22:21:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:10.625 22:21:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:10.625 22:21:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:10.625 bdev_null0 00:38:10.625 22:21:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:10.625 22:21:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:10.625 22:21:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:10.625 22:21:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:10.625 22:21:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:10.625 22:21:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:10.625 22:21:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:10.625 22:21:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:10.625 22:21:30 nvmf_dif.fio_dif_1_default -- 
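dif.sh builds its targets out of null bdevs that carry protection information: 64 MB of 512-byte blocks, each with 16 bytes of metadata holding a DIF type 1 tuple (NULL_SIZE=64, NULL_BLOCK_SIZE=512, NULL_META=16, NULL_DIF=1 above), and the transport is created with --dif-insert-or-strip so the target inserts and strips the PI while the initiator stays PI-unaware. The single-subsystem fixture used by fio_dif_1_default, as direct RPCs:

scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1   # 64 MB, 512+16
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420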
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:10.625 22:21:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:10.625 22:21:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:10.625 22:21:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:10.883 [2024-07-13 22:21:30.022032] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:10.883 22:21:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:10.883 22:21:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:38:10.883 22:21:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:38:10.883 22:21:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:10.883 22:21:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:10.883 22:21:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:38:10.883 22:21:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:10.883 22:21:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:38:10.883 22:21:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:38:10.883 22:21:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:10.883 22:21:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:10.884 22:21:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:38:10.884 22:21:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:10.884 22:21:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:10.884 { 00:38:10.884 "params": { 00:38:10.884 "name": "Nvme$subsystem", 00:38:10.884 "trtype": "$TEST_TRANSPORT", 00:38:10.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:10.884 "adrfam": "ipv4", 00:38:10.884 "trsvcid": "$NVMF_PORT", 00:38:10.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:10.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:10.884 "hdgst": ${hdgst:-false}, 00:38:10.884 "ddgst": ${ddgst:-false} 00:38:10.884 }, 00:38:10.884 "method": "bdev_nvme_attach_controller" 00:38:10.884 } 00:38:10.884 EOF 00:38:10.884 )") 00:38:10.884 22:21:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:38:10.884 22:21:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:10.884 22:21:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:10.884 22:21:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:38:10.884 22:21:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:10.884 22:21:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:10.884 22:21:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:38:10.884 22:21:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:10.884 22:21:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:38:10.884 22:21:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:38:10.884 22:21:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:38:10.884 22:21:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:10.884 22:21:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:38:10.884 22:21:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:38:10.884 22:21:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:10.884 "params": { 00:38:10.884 "name": "Nvme0", 00:38:10.884 "trtype": "tcp", 00:38:10.884 "traddr": "10.0.0.2", 00:38:10.884 "adrfam": "ipv4", 00:38:10.884 "trsvcid": "4420", 00:38:10.884 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:10.884 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:10.884 "hdgst": false, 00:38:10.884 "ddgst": false 00:38:10.884 }, 00:38:10.884 "method": "bdev_nvme_attach_controller" 00:38:10.884 }' 00:38:10.884 22:21:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:10.884 22:21:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:10.884 22:21:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # break 00:38:10.884 22:21:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:10.884 22:21:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:11.144 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:11.144 fio-3.35 00:38:11.144 Starting 1 thread 00:38:11.144 EAL: No free 2048 kB hugepages reported on node 1 00:38:23.358 00:38:23.358 filename0: (groupid=0, jobs=1): err= 0: pid=64622: Sat Jul 13 22:21:41 2024 00:38:23.358 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10039msec) 00:38:23.358 slat (nsec): min=5517, max=96570, avg=16676.17, stdev=8761.41 00:38:23.358 clat (usec): min=41746, max=42850, avg=41950.85, stdev=66.19 00:38:23.358 lat (usec): min=41757, max=42872, avg=41967.52, stdev=66.93 00:38:23.358 clat percentiles (usec): 00:38:23.358 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:38:23.358 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:38:23.358 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:23.358 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:38:23.358 | 99.99th=[42730] 00:38:23.358 bw ( KiB/s): min= 352, max= 384, per=99.76%, avg=380.80, stdev= 9.85, samples=20 00:38:23.358 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:38:23.358 lat (msec) : 50=100.00% 00:38:23.358 cpu : usr=90.94%, sys=8.56%, ctx=18, majf=0, minf=1636 00:38:23.358 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:23.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:23.358 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:23.358 latency : target=0, window=0, percentile=100.00%, depth=4 
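The fio job above never touches the kernel NVMe initiator: fio_bdev preloads ASAN together with SPDK's external engine (build/fio/spdk_bdev) and feeds fio two anonymous descriptors, /dev/fd/61 for the job file and /dev/fd/62 for a JSON config whose bdev_nvme_attach_controller entry (printed by gen_nvmf_target_json above) dials nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420. Flattened into ordinary files, the invocation is approximately as follows; the subsystems/bdev envelope around the printed config entry is assumed, since the trace shows only the inner object:

# bdev.json -- JSON config handed to the SPDK fio plugin (envelope assumed):
# { "subsystems": [ { "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller",
#     "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
#                 "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
#                 "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false } } ] } ] }
LD_PRELOAD="/usr/lib64/libasan.so.8 build/fio/spdk_bdev" \
    fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json job.fio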
00:38:23.358 00:38:23.358 Run status group 0 (all jobs): 00:38:23.358 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3824KiB (3916kB), run=10039-10039msec 00:38:23.358 ----------------------------------------------------- 00:38:23.358 Suppressions used: 00:38:23.358 count bytes template 00:38:23.358 1 8 /usr/src/fio/parse.c 00:38:23.358 1 8 libtcmalloc_minimal.so 00:38:23.358 1 904 libcrypto.so 00:38:23.358 ----------------------------------------------------- 00:38:23.358 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:23.358 00:38:23.358 real 0m12.365s 00:38:23.358 user 0m11.293s 00:38:23.358 sys 0m1.280s 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:38:23.358 ************************************ 00:38:23.358 END TEST fio_dif_1_default 00:38:23.358 ************************************ 00:38:23.358 22:21:42 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:38:23.358 22:21:42 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:38:23.358 22:21:42 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:23.358 22:21:42 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:23.358 22:21:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:23.358 ************************************ 00:38:23.358 START TEST fio_dif_1_multi_subsystems 00:38:23.358 ************************************ 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:38:23.358 22:21:42 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:23.358 bdev_null0 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:23.358 [2024-07-13 22:21:42.431185] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:23.358 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:23.359 bdev_null1 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:23.359 
22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:23.359 { 00:38:23.359 "params": { 00:38:23.359 "name": "Nvme$subsystem", 00:38:23.359 "trtype": "$TEST_TRANSPORT", 00:38:23.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:23.359 "adrfam": "ipv4", 00:38:23.359 "trsvcid": "$NVMF_PORT", 00:38:23.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:23.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:23.359 "hdgst": ${hdgst:-false}, 00:38:23.359 "ddgst": ${ddgst:-false} 00:38:23.359 }, 00:38:23.359 "method": "bdev_nvme_attach_controller" 00:38:23.359 } 00:38:23.359 EOF 00:38:23.359 )") 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:23.359 
22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:23.359 { 00:38:23.359 "params": { 00:38:23.359 "name": "Nvme$subsystem", 00:38:23.359 "trtype": "$TEST_TRANSPORT", 00:38:23.359 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:23.359 "adrfam": "ipv4", 00:38:23.359 "trsvcid": "$NVMF_PORT", 00:38:23.359 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:23.359 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:23.359 "hdgst": ${hdgst:-false}, 00:38:23.359 "ddgst": ${ddgst:-false} 00:38:23.359 }, 00:38:23.359 "method": "bdev_nvme_attach_controller" 00:38:23.359 } 00:38:23.359 EOF 00:38:23.359 )") 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
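The nvmf/common.sh@532-558 lines interleaved above build the two-controller configuration by appending one here-doc stanza per subsystem id to a bash array, then comma-joining the array and validating it through jq; the pretty-printed result is what gets printed next below. A condensed sketch of that pattern, with the traced values tcp/10.0.0.2/4420 substituted for the $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT variables and a simplified envelope (hdgst/ddgst default to false, as here):

config=()
for subsystem in 0 1; do
  # One attach-controller stanza per subsystem id.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Comma-join the stanzas (IFS is changed in a subshell so it only affects
# this join) and pretty-print the assembled config through jq.
(IFS=,; printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}") | jq .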
00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:23.359 "params": { 00:38:23.359 "name": "Nvme0", 00:38:23.359 "trtype": "tcp", 00:38:23.359 "traddr": "10.0.0.2", 00:38:23.359 "adrfam": "ipv4", 00:38:23.359 "trsvcid": "4420", 00:38:23.359 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:23.359 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:23.359 "hdgst": false, 00:38:23.359 "ddgst": false 00:38:23.359 }, 00:38:23.359 "method": "bdev_nvme_attach_controller" 00:38:23.359 },{ 00:38:23.359 "params": { 00:38:23.359 "name": "Nvme1", 00:38:23.359 "trtype": "tcp", 00:38:23.359 "traddr": "10.0.0.2", 00:38:23.359 "adrfam": "ipv4", 00:38:23.359 "trsvcid": "4420", 00:38:23.359 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:23.359 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:23.359 "hdgst": false, 00:38:23.359 "ddgst": false 00:38:23.359 }, 00:38:23.359 "method": "bdev_nvme_attach_controller" 00:38:23.359 }' 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # break 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:23.359 22:21:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:23.618 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:23.618 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:38:23.618 fio-3.35 00:38:23.618 Starting 2 threads 00:38:23.618 EAL: No free 2048 kB hugepages reported on node 1 00:38:35.862 00:38:35.862 filename0: (groupid=0, jobs=1): err= 0: pid=66146: Sat Jul 13 22:21:53 2024 00:38:35.862 read: IOPS=185, BW=743KiB/s (760kB/s)(7440KiB/10019msec) 00:38:35.862 slat (nsec): min=5183, max=47042, avg=15461.83, stdev=6359.24 00:38:35.862 clat (usec): min=926, max=42022, avg=21496.85, stdev=20433.43 00:38:35.862 lat (usec): min=937, max=42039, avg=21512.31, stdev=20431.80 00:38:35.862 clat percentiles (usec): 00:38:35.862 | 1.00th=[ 955], 5.00th=[ 971], 10.00th=[ 988], 20.00th=[ 1004], 00:38:35.862 | 30.00th=[ 1020], 40.00th=[ 1057], 50.00th=[41681], 60.00th=[41681], 00:38:35.862 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:38:35.862 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:38:35.862 | 99.99th=[42206] 00:38:35.862 bw ( KiB/s): min= 704, max= 768, per=66.14%, avg=742.40, stdev=32.17, samples=20 00:38:35.862 iops : min= 176, max= 192, avg=185.60, stdev= 8.04, samples=20 00:38:35.862 lat (usec) : 1000=16.77% 00:38:35.862 lat (msec) : 2=33.12%, 50=50.11% 00:38:35.862 cpu : usr=94.23%, sys=5.22%, ctx=16, majf=0, minf=1636 00:38:35.862 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:35.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.862 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:38:35.862 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.862 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:35.862 filename1: (groupid=0, jobs=1): err= 0: pid=66147: Sat Jul 13 22:21:53 2024 00:38:35.862 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10040msec) 00:38:35.862 slat (usec): min=6, max=165, avg=15.04, stdev= 6.93 00:38:35.862 clat (usec): min=41804, max=42998, avg=41957.89, stdev=72.37 00:38:35.862 lat (usec): min=41815, max=43032, avg=41972.93, stdev=73.13 00:38:35.862 clat percentiles (usec): 00:38:35.862 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:38:35.862 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:38:35.862 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:38:35.862 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:38:35.862 | 99.99th=[43254] 00:38:35.862 bw ( KiB/s): min= 352, max= 384, per=33.87%, avg=380.80, stdev= 9.85, samples=20 00:38:35.862 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:38:35.862 lat (msec) : 50=100.00% 00:38:35.862 cpu : usr=94.47%, sys=5.03%, ctx=15, majf=0, minf=1637 00:38:35.862 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:35.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.862 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:35.862 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:35.862 latency : target=0, window=0, percentile=100.00%, depth=4 00:38:35.862 00:38:35.862 Run status group 0 (all jobs): 00:38:35.862 READ: bw=1122KiB/s (1149kB/s), 381KiB/s-743KiB/s (390kB/s-760kB/s), io=11.0MiB (11.5MB), run=10019-10040msec 00:38:35.862 ----------------------------------------------------- 00:38:35.862 Suppressions used: 00:38:35.862 count bytes template 00:38:35.862 2 16 /usr/src/fio/parse.c 00:38:35.862 1 8 libtcmalloc_minimal.so 00:38:35.862 1 904 libcrypto.so 00:38:35.862 ----------------------------------------------------- 00:38:35.862 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@45 -- # for sub in "$@" 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:35.862 00:38:35.862 real 0m12.599s 00:38:35.862 user 0m21.385s 00:38:35.862 sys 0m1.503s 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:35.862 22:21:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:38:35.862 ************************************ 00:38:35.862 END TEST fio_dif_1_multi_subsystems 00:38:35.862 ************************************ 00:38:35.862 22:21:55 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:38:35.862 22:21:55 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:38:35.862 22:21:55 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:38:35.862 22:21:55 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:35.862 22:21:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:35.862 ************************************ 00:38:35.862 START TEST fio_dif_rand_params 00:38:35.862 ************************************ 00:38:35.862 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:38:35.862 22:21:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:38:35.862 22:21:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:38:35.862 22:21:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:38:35.862 22:21:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:38:35.862 22:21:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:38:35.862 22:21:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:38:35.862 22:21:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:38:35.862 22:21:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:38:35.862 22:21:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:35.862 22:21:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:35.862 22:21:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:35.862 22:21:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:35.862 22:21:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:35.862 22:21:55 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:35.862 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.862 bdev_null0 00:38:35.862 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:35.862 22:21:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:35.862 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:35.862 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.862 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:35.862 22:21:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:35.862 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:35.862 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.862 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:35.862 22:21:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:35.863 [2024-07-13 22:21:55.070597] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:35.863 { 00:38:35.863 "params": { 00:38:35.863 "name": "Nvme$subsystem", 00:38:35.863 "trtype": "$TEST_TRANSPORT", 00:38:35.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:35.863 "adrfam": "ipv4", 00:38:35.863 "trsvcid": "$NVMF_PORT", 00:38:35.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:35.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:35.863 "hdgst": ${hdgst:-false}, 00:38:35.863 "ddgst": ${ddgst:-false} 00:38:35.863 }, 00:38:35.863 "method": "bdev_nvme_attach_controller" 00:38:35.863 } 00:38:35.863 EOF 00:38:35.863 )") 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:35.863 
22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:35.863 "params": { 00:38:35.863 "name": "Nvme0", 00:38:35.863 "trtype": "tcp", 00:38:35.863 "traddr": "10.0.0.2", 00:38:35.863 "adrfam": "ipv4", 00:38:35.863 "trsvcid": "4420", 00:38:35.863 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:35.863 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:35.863 "hdgst": false, 00:38:35.863 "ddgst": false 00:38:35.863 }, 00:38:35.863 "method": "bdev_nvme_attach_controller" 00:38:35.863 }' 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:35.863 22:21:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:36.122 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:36.122 ... 
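The autotest_common.sh@1337-@1352 lines threaded through the setup above implement the sanitizer-preload step that precedes every fio run in this log: ldd the fio plugin, pull out the resolved sanitizer runtime if one is linked in, and preload it ahead of the plugin. A sketch of that logic (the JSON and job-file paths are placeholders; the 3-thread job banner continues below):

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
  # ldd's third column is the resolved library path,
  # /usr/lib64/libasan.so.8 on this machine.
  asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
  [[ -n $asan_lib ]] && break
done
# Preload the sanitizer first so its symbols are in place before fio
# dlopen()s the instrumented plugin.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme.json /tmp/job.fio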
00:38:36.122 fio-3.35 00:38:36.122 Starting 3 threads 00:38:36.122 EAL: No free 2048 kB hugepages reported on node 1 00:38:42.679 00:38:42.679 filename0: (groupid=0, jobs=1): err= 0: pid=67664: Sat Jul 13 22:22:01 2024 00:38:42.679 read: IOPS=182, BW=22.8MiB/s (23.9MB/s)(115MiB/5047msec) 00:38:42.679 slat (nsec): min=6511, max=96592, avg=20409.94, stdev=5791.86 00:38:42.679 clat (usec): min=6020, max=58857, avg=16400.21, stdev=13195.56 00:38:42.679 lat (usec): min=6040, max=58870, avg=16420.62, stdev=13195.47 00:38:42.679 clat percentiles (usec): 00:38:42.679 | 1.00th=[ 6456], 5.00th=[ 6980], 10.00th=[ 7635], 20.00th=[ 9503], 00:38:42.679 | 30.00th=[10421], 40.00th=[10945], 50.00th=[12387], 60.00th=[13698], 00:38:42.679 | 70.00th=[14746], 80.00th=[15926], 90.00th=[49021], 95.00th=[53216], 00:38:42.679 | 99.00th=[56361], 99.50th=[57410], 99.90th=[58983], 99.95th=[58983], 00:38:42.679 | 99.99th=[58983] 00:38:42.679 bw ( KiB/s): min=18432, max=29696, per=34.47%, avg=23453.90, stdev=3766.16, samples=10 00:38:42.679 iops : min= 144, max= 232, avg=183.20, stdev=29.44, samples=10 00:38:42.679 lat (msec) : 10=25.24%, 20=63.66%, 50=1.63%, 100=9.47% 00:38:42.679 cpu : usr=93.54%, sys=5.91%, ctx=15, majf=0, minf=1639 00:38:42.679 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:42.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:42.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:42.679 issued rwts: total=919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:42.679 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:42.679 filename0: (groupid=0, jobs=1): err= 0: pid=67665: Sat Jul 13 22:22:01 2024 00:38:42.679 read: IOPS=162, BW=20.3MiB/s (21.3MB/s)(102MiB/5013msec) 00:38:42.679 slat (nsec): min=9983, max=58043, avg=24826.76, stdev=8222.13 00:38:42.679 clat (usec): min=6312, max=89678, avg=18439.18, stdev=15315.37 00:38:42.679 lat (usec): min=6340, max=89712, avg=18464.01, stdev=15315.29 00:38:42.679 clat percentiles (usec): 00:38:42.679 | 1.00th=[ 7046], 5.00th=[ 7308], 10.00th=[ 8455], 20.00th=[10028], 00:38:42.679 | 30.00th=[10552], 40.00th=[11600], 50.00th=[13173], 60.00th=[13960], 00:38:42.679 | 70.00th=[14877], 80.00th=[16188], 90.00th=[52167], 95.00th=[54789], 00:38:42.679 | 99.00th=[56886], 99.50th=[57934], 99.90th=[89654], 99.95th=[89654], 00:38:42.679 | 99.99th=[89654] 00:38:42.679 bw ( KiB/s): min=13312, max=25600, per=30.51%, avg=20761.60, stdev=4139.24, samples=10 00:38:42.679 iops : min= 104, max= 200, avg=162.20, stdev=32.34, samples=10 00:38:42.679 lat (msec) : 10=19.66%, 20=64.99%, 50=0.74%, 100=14.62% 00:38:42.679 cpu : usr=84.98%, sys=10.34%, ctx=583, majf=0, minf=1634 00:38:42.679 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:42.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:42.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:42.679 issued rwts: total=814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:42.679 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:42.679 filename0: (groupid=0, jobs=1): err= 0: pid=67666: Sat Jul 13 22:22:01 2024 00:38:42.679 read: IOPS=188, BW=23.5MiB/s (24.7MB/s)(119MiB/5046msec) 00:38:42.679 slat (nsec): min=6388, max=53287, avg=20227.26, stdev=5253.52 00:38:42.679 clat (usec): min=6192, max=60373, avg=15859.34, stdev=12615.73 00:38:42.679 lat (usec): min=6211, max=60396, avg=15879.57, stdev=12615.61 00:38:42.679 clat percentiles (usec): 
00:38:42.679 | 1.00th=[ 6587], 5.00th=[ 7177], 10.00th=[ 7439], 20.00th=[ 9110], 00:38:42.679 | 30.00th=[10159], 40.00th=[10945], 50.00th=[12256], 60.00th=[13566], 00:38:42.679 | 70.00th=[14615], 80.00th=[15664], 90.00th=[20579], 95.00th=[52691], 00:38:42.679 | 99.00th=[55837], 99.50th=[56886], 99.90th=[60556], 99.95th=[60556], 00:38:42.679 | 99.99th=[60556] 00:38:42.679 bw ( KiB/s): min=19456, max=29184, per=35.63%, avg=24243.20, stdev=3394.19, samples=10 00:38:42.679 iops : min= 152, max= 228, avg=189.40, stdev=26.52, samples=10 00:38:42.679 lat (msec) : 10=27.26%, 20=62.53%, 50=2.00%, 100=8.21% 00:38:42.679 cpu : usr=93.50%, sys=5.91%, ctx=11, majf=0, minf=1635 00:38:42.679 IO depths : 1=3.3%, 2=96.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:42.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:42.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:42.679 issued rwts: total=950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:42.679 latency : target=0, window=0, percentile=100.00%, depth=3 00:38:42.679 00:38:42.679 Run status group 0 (all jobs): 00:38:42.679 READ: bw=66.5MiB/s (69.7MB/s), 20.3MiB/s-23.5MiB/s (21.3MB/s-24.7MB/s), io=335MiB (352MB), run=5013-5047msec 00:38:42.938 ----------------------------------------------------- 00:38:42.938 Suppressions used: 00:38:42.938 count bytes template 00:38:42.938 5 44 /usr/src/fio/parse.c 00:38:42.938 1 8 libtcmalloc_minimal.so 00:38:42.938 1 904 libcrypto.so 00:38:42.938 ----------------------------------------------------- 00:38:42.938 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:38:42.938 22:22:02 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:42.938 bdev_null0 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:42.938 [2024-07-13 22:22:02.255379] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:42.938 bdev_null1 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:42.938 bdev_null2 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:38:42.938 22:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 
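By this point the trace has stood up three DIF-type-2 null bdevs behind cnode0 through cnode2. rpc_cmd in this harness is a thin wrapper around SPDK's scripts/rpc.py, so the setup condenses to the loop below (a sketch: it assumes a target is already running and that the TCP transport was created earlier in the run); the JSON generation for all three controllers continues in the trace that follows.

# One 64 MiB null bdev (512-byte blocks, 16-byte metadata, DIF type 2) per
# NVMe-oF subsystem, each exposed as a namespace on a TCP listener.
for i in 0 1 2; do
  scripts/rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 2
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
    --serial-number 53313233-$i --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
    -t tcp -a 10.0.0.2 -s 4420
done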
00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:42.939 { 00:38:42.939 "params": { 00:38:42.939 "name": "Nvme$subsystem", 00:38:42.939 "trtype": "$TEST_TRANSPORT", 00:38:42.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:42.939 "adrfam": "ipv4", 00:38:42.939 "trsvcid": "$NVMF_PORT", 00:38:42.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:42.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:42.939 "hdgst": ${hdgst:-false}, 00:38:42.939 "ddgst": ${ddgst:-false} 00:38:42.939 }, 00:38:42.939 "method": "bdev_nvme_attach_controller" 00:38:42.939 } 00:38:42.939 EOF 00:38:42.939 )") 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:42.939 { 00:38:42.939 "params": { 00:38:42.939 "name": "Nvme$subsystem", 00:38:42.939 "trtype": "$TEST_TRANSPORT", 00:38:42.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:42.939 "adrfam": "ipv4", 00:38:42.939 "trsvcid": "$NVMF_PORT", 00:38:42.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:42.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:38:42.939 "hdgst": ${hdgst:-false}, 00:38:42.939 "ddgst": ${ddgst:-false} 00:38:42.939 }, 00:38:42.939 "method": "bdev_nvme_attach_controller" 00:38:42.939 } 00:38:42.939 EOF 00:38:42.939 )") 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:42.939 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:43.201 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:43.201 22:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:43.201 22:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:43.201 { 00:38:43.201 "params": { 00:38:43.201 "name": "Nvme$subsystem", 00:38:43.201 "trtype": "$TEST_TRANSPORT", 00:38:43.201 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:43.201 "adrfam": "ipv4", 00:38:43.201 "trsvcid": "$NVMF_PORT", 00:38:43.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:43.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:43.201 "hdgst": ${hdgst:-false}, 00:38:43.201 "ddgst": ${ddgst:-false} 00:38:43.201 }, 00:38:43.201 "method": "bdev_nvme_attach_controller" 00:38:43.201 } 00:38:43.201 EOF 00:38:43.201 )") 00:38:43.201 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:38:43.201 22:22:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:43.201 22:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:43.201 22:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:38:43.201 22:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:38:43.201 22:22:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:43.201 "params": { 00:38:43.201 "name": "Nvme0", 00:38:43.201 "trtype": "tcp", 00:38:43.201 "traddr": "10.0.0.2", 00:38:43.201 "adrfam": "ipv4", 00:38:43.201 "trsvcid": "4420", 00:38:43.201 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:43.201 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:43.201 "hdgst": false, 00:38:43.201 "ddgst": false 00:38:43.201 }, 00:38:43.201 "method": "bdev_nvme_attach_controller" 00:38:43.201 },{ 00:38:43.201 "params": { 00:38:43.201 "name": "Nvme1", 00:38:43.201 "trtype": "tcp", 00:38:43.201 "traddr": "10.0.0.2", 00:38:43.201 "adrfam": "ipv4", 00:38:43.201 "trsvcid": "4420", 00:38:43.201 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:43.201 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:43.201 "hdgst": false, 00:38:43.201 "ddgst": false 00:38:43.201 }, 00:38:43.201 "method": "bdev_nvme_attach_controller" 00:38:43.201 },{ 00:38:43.201 "params": { 00:38:43.201 "name": "Nvme2", 00:38:43.201 "trtype": "tcp", 00:38:43.201 "traddr": "10.0.0.2", 00:38:43.201 "adrfam": "ipv4", 00:38:43.201 "trsvcid": "4420", 00:38:43.201 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:38:43.201 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:38:43.201 "hdgst": false, 00:38:43.201 "ddgst": false 00:38:43.201 }, 00:38:43.201 "method": "bdev_nvme_attach_controller" 00:38:43.201 }' 00:38:43.201 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:43.201 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:43.201 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:38:43.201 22:22:02 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:43.201 22:22:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:43.461 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:43.461 ... 00:38:43.461 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:43.461 ... 00:38:43.461 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:38:43.461 ... 00:38:43.461 fio-3.35 00:38:43.461 Starting 24 threads 00:38:43.461 EAL: No free 2048 kB hugepages reported on node 1 00:38:55.675 00:38:55.675 filename0: (groupid=0, jobs=1): err= 0: pid=68648: Sat Jul 13 22:22:14 2024 00:38:55.675 read: IOPS=365, BW=1462KiB/s (1497kB/s)(14.3MiB/10011msec) 00:38:55.675 slat (nsec): min=7751, max=97903, avg=31956.39, stdev=12456.32 00:38:55.675 clat (msec): min=15, max=101, avg=43.49, stdev= 6.28 00:38:55.675 lat (msec): min=15, max=101, avg=43.52, stdev= 6.28 00:38:55.675 clat percentiles (msec): 00:38:55.675 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 43], 20.00th=[ 44], 00:38:55.675 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:38:55.675 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 46], 95.00th=[ 46], 00:38:55.675 | 99.00th=[ 71], 99.50th=[ 74], 99.90th=[ 102], 99.95th=[ 102], 00:38:55.675 | 99.99th=[ 102] 00:38:55.675 bw ( KiB/s): min= 1282, max= 1584, per=4.21%, avg=1457.79, stdev=77.20, samples=19 00:38:55.675 iops : min= 320, max= 396, avg=364.42, stdev=19.36, samples=19 00:38:55.675 lat (msec) : 20=0.27%, 50=97.05%, 100=2.24%, 250=0.44% 00:38:55.675 cpu : usr=94.81%, sys=2.74%, ctx=75, majf=0, minf=1635 00:38:55.675 IO depths : 1=4.0%, 2=8.4%, 4=18.2%, 8=59.8%, 16=9.6%, 32=0.0%, >=64=0.0% 00:38:55.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.675 complete : 0=0.0%, 4=92.5%, 8=2.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.675 issued rwts: total=3660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:55.675 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:55.675 filename0: (groupid=0, jobs=1): err= 0: pid=68649: Sat Jul 13 22:22:14 2024 00:38:55.675 read: IOPS=363, BW=1455KiB/s (1490kB/s)(14.2MiB/10026msec) 00:38:55.675 slat (nsec): min=4594, max=78670, avg=23586.78, stdev=11394.19 00:38:55.675 clat (usec): min=12040, max=49923, avg=43759.25, stdev=2804.60 00:38:55.675 lat (usec): min=12092, max=49940, avg=43782.84, stdev=2804.22 00:38:55.675 clat percentiles (usec): 00:38:55.675 | 1.00th=[31589], 5.00th=[42730], 10.00th=[43254], 20.00th=[43779], 00:38:55.675 | 30.00th=[43779], 40.00th=[43779], 50.00th=[43779], 60.00th=[44303], 00:38:55.675 | 70.00th=[44303], 80.00th=[44827], 90.00th=[44827], 95.00th=[45351], 00:38:55.675 | 99.00th=[46400], 99.50th=[46924], 99.90th=[49021], 99.95th=[50070], 00:38:55.675 | 99.99th=[50070] 00:38:55.675 bw ( KiB/s): min= 1405, max= 1664, per=4.20%, avg=1452.65, stdev=75.25, samples=20 00:38:55.675 iops : min= 351, max= 416, avg=363.15, stdev=18.82, samples=20 00:38:55.675 lat (msec) : 20=0.44%, 50=99.56% 00:38:55.675 cpu : usr=97.44%, sys=1.89%, ctx=93, majf=0, minf=1635 00:38:55.675 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:38:55.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:38:55.675 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.675 issued rwts: total=3648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:55.675 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:55.675 filename0: (groupid=0, jobs=1): err= 0: pid=68650: Sat Jul 13 22:22:14 2024 00:38:55.675 read: IOPS=359, BW=1439KiB/s (1473kB/s)(14.1MiB/10009msec) 00:38:55.675 slat (nsec): min=5751, max=98273, avg=39942.87, stdev=12427.04 00:38:55.675 clat (usec): min=23381, max=95631, avg=44109.18, stdev=3938.54 00:38:55.675 lat (usec): min=23407, max=95652, avg=44149.12, stdev=3937.37 00:38:55.675 clat percentiles (usec): 00:38:55.675 | 1.00th=[41157], 5.00th=[42730], 10.00th=[43254], 20.00th=[43254], 00:38:55.675 | 30.00th=[43779], 40.00th=[43779], 50.00th=[43779], 60.00th=[43779], 00:38:55.675 | 70.00th=[44303], 80.00th=[44303], 90.00th=[44827], 95.00th=[45351], 00:38:55.675 | 99.00th=[47449], 99.50th=[65274], 99.90th=[95945], 99.95th=[95945], 00:38:55.675 | 99.99th=[95945] 00:38:55.675 bw ( KiB/s): min= 1280, max= 1536, per=4.15%, avg=1434.95, stdev=68.52, samples=19 00:38:55.675 iops : min= 320, max= 384, avg=358.74, stdev=17.13, samples=19 00:38:55.675 lat (msec) : 50=99.31%, 100=0.69% 00:38:55.675 cpu : usr=96.87%, sys=1.85%, ctx=134, majf=0, minf=1636 00:38:55.675 IO depths : 1=5.9%, 2=12.1%, 4=24.9%, 8=50.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:38:55.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.675 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.675 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:55.675 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:55.675 filename0: (groupid=0, jobs=1): err= 0: pid=68651: Sat Jul 13 22:22:14 2024 00:38:55.675 read: IOPS=361, BW=1444KiB/s (1479kB/s)(14.1MiB/10014msec) 00:38:55.675 slat (nsec): min=4999, max=90541, avg=42319.65, stdev=11512.56 00:38:55.675 clat (usec): min=21705, max=69203, avg=43924.73, stdev=2014.33 00:38:55.675 lat (usec): min=21718, max=69225, avg=43967.05, stdev=2013.77 00:38:55.675 clat percentiles (usec): 00:38:55.675 | 1.00th=[42206], 5.00th=[42730], 10.00th=[42730], 20.00th=[43254], 00:38:55.675 | 30.00th=[43779], 40.00th=[43779], 50.00th=[43779], 60.00th=[43779], 00:38:55.675 | 70.00th=[44303], 80.00th=[44303], 90.00th=[44827], 95.00th=[45351], 00:38:55.675 | 99.00th=[46400], 99.50th=[57410], 99.90th=[68682], 99.95th=[68682], 00:38:55.675 | 99.99th=[68682] 00:38:55.675 bw ( KiB/s): min= 1408, max= 1536, per=4.17%, avg=1441.68, stdev=57.91, samples=19 00:38:55.675 iops : min= 352, max= 384, avg=360.42, stdev=14.48, samples=19 00:38:55.675 lat (msec) : 50=99.45%, 100=0.55% 00:38:55.675 cpu : usr=91.12%, sys=4.26%, ctx=274, majf=0, minf=1634 00:38:55.675 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:55.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.675 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.675 issued rwts: total=3616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:55.675 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:55.675 filename0: (groupid=0, jobs=1): err= 0: pid=68652: Sat Jul 13 22:22:14 2024 00:38:55.675 read: IOPS=359, BW=1437KiB/s (1472kB/s)(14.1MiB/10018msec) 00:38:55.675 slat (nsec): min=4885, max=97400, avg=41363.31, stdev=12452.61 00:38:55.675 clat (usec): min=27518, max=73171, avg=44146.68, stdev=3131.38 00:38:55.675 lat (usec): 
min=27544, max=73192, avg=44188.04, stdev=3130.55 00:38:55.675 clat percentiles (usec): 00:38:55.675 | 1.00th=[42206], 5.00th=[42730], 10.00th=[43254], 20.00th=[43254], 00:38:55.675 | 30.00th=[43779], 40.00th=[43779], 50.00th=[43779], 60.00th=[43779], 00:38:55.675 | 70.00th=[44303], 80.00th=[44303], 90.00th=[44827], 95.00th=[45351], 00:38:55.675 | 99.00th=[63177], 99.50th=[69731], 99.90th=[71828], 99.95th=[72877], 00:38:55.675 | 99.99th=[72877] 00:38:55.675 bw ( KiB/s): min= 1280, max= 1536, per=4.15%, avg=1434.95, stdev=68.73, samples=19 00:38:55.675 iops : min= 320, max= 384, avg=358.74, stdev=17.18, samples=19 00:38:55.675 lat (msec) : 50=98.33%, 100=1.67% 00:38:55.675 cpu : usr=94.60%, sys=3.00%, ctx=320, majf=0, minf=1636 00:38:55.675 IO depths : 1=5.7%, 2=11.8%, 4=24.8%, 8=50.9%, 16=6.8%, 32=0.0%, >=64=0.0% 00:38:55.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.675 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.675 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:55.675 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:55.675 filename0: (groupid=0, jobs=1): err= 0: pid=68653: Sat Jul 13 22:22:14 2024 00:38:55.675 read: IOPS=359, BW=1436KiB/s (1471kB/s)(14.0MiB/10015msec) 00:38:55.675 slat (nsec): min=7695, max=76379, avg=35412.39, stdev=11738.83 00:38:55.675 clat (usec): min=23538, max=97967, avg=44247.95, stdev=5599.09 00:38:55.675 lat (usec): min=23551, max=97989, avg=44283.36, stdev=5598.87 00:38:55.675 clat percentiles (usec): 00:38:55.675 | 1.00th=[27132], 5.00th=[41681], 10.00th=[42730], 20.00th=[43254], 00:38:55.675 | 30.00th=[43779], 40.00th=[43779], 50.00th=[43779], 60.00th=[44303], 00:38:55.675 | 70.00th=[44303], 80.00th=[44827], 90.00th=[45351], 95.00th=[47449], 00:38:55.675 | 99.00th=[70779], 99.50th=[72877], 99.90th=[98042], 99.95th=[98042], 00:38:55.675 | 99.99th=[98042] 00:38:55.675 bw ( KiB/s): min= 1280, max= 1536, per=4.14%, avg=1432.42, stdev=69.00, samples=19 00:38:55.675 iops : min= 320, max= 384, avg=358.11, stdev=17.25, samples=19 00:38:55.675 lat (msec) : 50=96.44%, 100=3.56% 00:38:55.675 cpu : usr=96.96%, sys=2.04%, ctx=181, majf=0, minf=1634 00:38:55.675 IO depths : 1=3.8%, 2=9.0%, 4=21.1%, 8=57.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:38:55.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.675 complete : 0=0.0%, 4=93.2%, 8=1.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.675 issued rwts: total=3596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:55.676 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:55.676 filename0: (groupid=0, jobs=1): err= 0: pid=68654: Sat Jul 13 22:22:14 2024 00:38:55.676 read: IOPS=360, BW=1443KiB/s (1477kB/s)(14.1MiB/10021msec) 00:38:55.676 slat (nsec): min=6392, max=86970, avg=31677.84, stdev=10477.86 00:38:55.676 clat (usec): min=21568, max=69235, avg=44099.83, stdev=2936.37 00:38:55.676 lat (usec): min=21594, max=69260, avg=44131.51, stdev=2935.65 00:38:55.676 clat percentiles (usec): 00:38:55.676 | 1.00th=[30278], 5.00th=[42730], 10.00th=[43254], 20.00th=[43779], 00:38:55.676 | 30.00th=[43779], 40.00th=[43779], 50.00th=[43779], 60.00th=[44303], 00:38:55.676 | 70.00th=[44303], 80.00th=[44827], 90.00th=[44827], 95.00th=[45351], 00:38:55.676 | 99.00th=[57934], 99.50th=[66323], 99.90th=[69731], 99.95th=[69731], 00:38:55.676 | 99.99th=[69731] 00:38:55.676 bw ( KiB/s): min= 1408, max= 1536, per=4.16%, avg=1440.00, stdev=55.18, samples=20 00:38:55.676 iops : min= 352, max= 384, avg=360.00, 
stdev=13.80, samples=20 00:38:55.676 lat (msec) : 50=98.73%, 100=1.27% 00:38:55.676 cpu : usr=91.39%, sys=4.42%, ctx=463, majf=0, minf=1636 00:38:55.676 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:38:55.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.676 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.676 issued rwts: total=3614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:55.676 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:55.676 filename0: (groupid=0, jobs=1): err= 0: pid=68655: Sat Jul 13 22:22:14 2024 00:38:55.676 read: IOPS=363, BW=1456KiB/s (1490kB/s)(14.2MiB/10025msec) 00:38:55.676 slat (usec): min=4, max=133, avg=26.60, stdev=10.09 00:38:55.676 clat (usec): min=11855, max=48733, avg=43746.31, stdev=2906.56 00:38:55.676 lat (usec): min=11886, max=48759, avg=43772.90, stdev=2906.66 00:38:55.676 clat percentiles (usec): 00:38:55.676 | 1.00th=[31851], 5.00th=[42730], 10.00th=[43254], 20.00th=[43254], 00:38:55.676 | 30.00th=[43779], 40.00th=[43779], 50.00th=[43779], 60.00th=[44303], 00:38:55.676 | 70.00th=[44303], 80.00th=[44827], 90.00th=[44827], 95.00th=[45876], 00:38:55.676 | 99.00th=[47449], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:38:55.676 | 99.99th=[48497] 00:38:55.676 bw ( KiB/s): min= 1408, max= 1664, per=4.20%, avg=1452.90, stdev=75.09, samples=20 00:38:55.676 iops : min= 352, max= 416, avg=363.20, stdev=18.79, samples=20 00:38:55.676 lat (msec) : 20=0.44%, 50=99.56% 00:38:55.676 cpu : usr=97.01%, sys=1.83%, ctx=157, majf=0, minf=1637 00:38:55.676 IO depths : 1=5.3%, 2=11.3%, 4=24.1%, 8=52.1%, 16=7.2%, 32=0.0%, >=64=0.0% 00:38:55.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.676 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.676 issued rwts: total=3648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:55.676 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:55.676 filename1: (groupid=0, jobs=1): err= 0: pid=68656: Sat Jul 13 22:22:14 2024 00:38:55.676 read: IOPS=360, BW=1444KiB/s (1478kB/s)(14.1MiB/10018msec) 00:38:55.676 slat (usec): min=4, max=163, avg=38.65, stdev=11.07 00:38:55.676 clat (usec): min=20417, max=65723, avg=43998.35, stdev=2217.49 00:38:55.676 lat (usec): min=20580, max=65747, avg=44037.00, stdev=2215.83 00:38:55.676 clat percentiles (usec): 00:38:55.676 | 1.00th=[41157], 5.00th=[42730], 10.00th=[43254], 20.00th=[43254], 00:38:55.676 | 30.00th=[43779], 40.00th=[43779], 50.00th=[43779], 60.00th=[44303], 00:38:55.676 | 70.00th=[44303], 80.00th=[44303], 90.00th=[44827], 95.00th=[45351], 00:38:55.676 | 99.00th=[46924], 99.50th=[58983], 99.90th=[65799], 99.95th=[65799], 00:38:55.676 | 99.99th=[65799] 00:38:55.676 bw ( KiB/s): min= 1408, max= 1536, per=4.16%, avg=1440.10, stdev=56.81, samples=20 00:38:55.676 iops : min= 352, max= 384, avg=360.00, stdev=14.22, samples=20 00:38:55.676 lat (msec) : 50=99.34%, 100=0.66% 00:38:55.676 cpu : usr=93.82%, sys=3.34%, ctx=165, majf=0, minf=1636 00:38:55.676 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:38:55.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.676 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.676 issued rwts: total=3616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:55.676 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:55.676 filename1: (groupid=0, jobs=1): err= 0: pid=68657: Sat Jul 13 
22:22:14 2024 00:38:55.676 read: IOPS=363, BW=1456KiB/s (1491kB/s)(14.2MiB/10024msec) 00:38:55.676 slat (nsec): min=7080, max=97519, avg=35446.93, stdev=10878.27 00:38:55.676 clat (usec): min=11838, max=48428, avg=43649.96, stdev=2871.99 00:38:55.676 lat (usec): min=11876, max=48457, avg=43685.40, stdev=2871.50 00:38:55.676 clat percentiles (usec): 00:38:55.676 | 1.00th=[31327], 5.00th=[42730], 10.00th=[43254], 20.00th=[43254], 00:38:55.676 | 30.00th=[43779], 40.00th=[43779], 50.00th=[43779], 60.00th=[44303], 00:38:55.676 | 70.00th=[44303], 80.00th=[44303], 90.00th=[44827], 95.00th=[45351], 00:38:55.676 | 99.00th=[46400], 99.50th=[46400], 99.90th=[47973], 99.95th=[48497], 00:38:55.676 | 99.99th=[48497] 00:38:55.676 bw ( KiB/s): min= 1408, max= 1664, per=4.20%, avg=1452.90, stdev=75.09, samples=20 00:38:55.676 iops : min= 352, max= 416, avg=363.20, stdev=18.79, samples=20 00:38:55.676 lat (msec) : 20=0.44%, 50=99.56% 00:38:55.676 cpu : usr=91.22%, sys=4.01%, ctx=167, majf=0, minf=1637 00:38:55.676 IO depths : 1=5.7%, 2=12.0%, 4=24.9%, 8=50.7%, 16=6.8%, 32=0.0%, >=64=0.0% 00:38:55.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.676 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.676 issued rwts: total=3648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:55.676 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:55.676 filename1: (groupid=0, jobs=1): err= 0: pid=68658: Sat Jul 13 22:22:14 2024 00:38:55.676 read: IOPS=360, BW=1442KiB/s (1477kB/s)(14.1MiB/10020msec) 00:38:55.676 slat (usec): min=6, max=123, avg=41.14, stdev=13.44 00:38:55.676 clat (usec): min=17496, max=78601, avg=44015.92, stdev=4832.43 00:38:55.676 lat (usec): min=17518, max=78629, avg=44057.06, stdev=4831.90 00:38:55.676 clat percentiles (usec): 00:38:55.676 | 1.00th=[26608], 5.00th=[42206], 10.00th=[42730], 20.00th=[43254], 00:38:55.676 | 30.00th=[43779], 40.00th=[43779], 50.00th=[43779], 60.00th=[44303], 00:38:55.676 | 70.00th=[44303], 80.00th=[44303], 90.00th=[45351], 95.00th=[45876], 00:38:55.676 | 99.00th=[67634], 99.50th=[67634], 99.90th=[77071], 99.95th=[78119], 00:38:55.676 | 99.99th=[78119] 00:38:55.676 bw ( KiB/s): min= 1392, max= 1536, per=4.16%, avg=1440.00, stdev=53.70, samples=20 00:38:55.676 iops : min= 348, max= 384, avg=360.00, stdev=13.42, samples=20 00:38:55.676 lat (msec) : 20=0.11%, 50=96.71%, 100=3.18% 00:38:55.676 cpu : usr=91.94%, sys=3.84%, ctx=284, majf=0, minf=1637 00:38:55.676 IO depths : 1=3.9%, 2=9.9%, 4=24.4%, 8=53.2%, 16=8.6%, 32=0.0%, >=64=0.0% 00:38:55.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.676 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.676 issued rwts: total=3613,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:55.676 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:55.676 filename1: (groupid=0, jobs=1): err= 0: pid=68659: Sat Jul 13 22:22:14 2024 00:38:55.676 read: IOPS=359, BW=1436KiB/s (1471kB/s)(14.0MiB/10008msec) 00:38:55.676 slat (usec): min=11, max=351, avg=59.93, stdev=16.87 00:38:55.676 clat (usec): min=22589, max=92020, avg=44024.48, stdev=4877.65 00:38:55.676 lat (usec): min=22648, max=92054, avg=44084.41, stdev=4877.00 00:38:55.676 clat percentiles (usec): 00:38:55.676 | 1.00th=[30016], 5.00th=[42206], 10.00th=[42730], 20.00th=[43254], 00:38:55.676 | 30.00th=[43254], 40.00th=[43779], 50.00th=[43779], 60.00th=[43779], 00:38:55.676 | 70.00th=[44303], 80.00th=[44303], 90.00th=[44827], 95.00th=[45876], 
00:38:55.676 | 99.00th=[72877], 99.50th=[78119], 99.90th=[91751], 99.95th=[91751], 00:38:55.676 | 99.99th=[91751] 00:38:55.676 bw ( KiB/s): min= 1280, max= 1536, per=4.14%, avg=1431.42, stdev=70.49, samples=19 00:38:55.676 iops : min= 320, max= 384, avg=357.84, stdev=17.63, samples=19 00:38:55.676 lat (msec) : 50=97.94%, 100=2.06% 00:38:55.676 cpu : usr=98.03%, sys=1.44%, ctx=13, majf=0, minf=1636 00:38:55.676 IO depths : 1=5.3%, 2=11.1%, 4=23.4%, 8=52.8%, 16=7.4%, 32=0.0%, >=64=0.0% 00:38:55.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.676 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.676 issued rwts: total=3594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:55.676 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:55.676 filename1: (groupid=0, jobs=1): err= 0: pid=68660: Sat Jul 13 22:22:14 2024 00:38:55.676 read: IOPS=359, BW=1438KiB/s (1473kB/s)(14.1MiB/10011msec) 00:38:55.676 slat (nsec): min=5680, max=67416, avg=33591.24, stdev=9074.99 00:38:55.676 clat (usec): min=29537, max=97800, avg=44192.55, stdev=3770.47 00:38:55.676 lat (usec): min=29585, max=97828, avg=44226.14, stdev=3769.22 00:38:55.676 clat percentiles (usec): 00:38:55.676 | 1.00th=[42206], 5.00th=[42730], 10.00th=[43254], 20.00th=[43254], 00:38:55.676 | 30.00th=[43779], 40.00th=[43779], 50.00th=[43779], 60.00th=[44303], 00:38:55.676 | 70.00th=[44303], 80.00th=[44303], 90.00th=[44827], 95.00th=[45351], 00:38:55.676 | 99.00th=[46400], 99.50th=[47973], 99.90th=[98042], 99.95th=[98042], 00:38:55.676 | 99.99th=[98042] 00:38:55.676 bw ( KiB/s): min= 1280, max= 1536, per=4.15%, avg=1434.95, stdev=68.52, samples=19 00:38:55.676 iops : min= 320, max= 384, avg=358.74, stdev=17.13, samples=19 00:38:55.676 lat (msec) : 50=99.56%, 100=0.44% 00:38:55.676 cpu : usr=98.21%, sys=1.34%, ctx=13, majf=0, minf=1636 00:38:55.676 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:55.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.676 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.676 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:55.676 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:55.676 filename1: (groupid=0, jobs=1): err= 0: pid=68661: Sat Jul 13 22:22:14 2024 00:38:55.676 read: IOPS=360, BW=1443KiB/s (1478kB/s)(14.1MiB/10022msec) 00:38:55.676 slat (nsec): min=4954, max=78615, avg=40816.35, stdev=10736.81 00:38:55.676 clat (usec): min=22359, max=69319, avg=43969.03, stdev=2678.44 00:38:55.676 lat (usec): min=22376, max=69347, avg=44009.84, stdev=2678.17 00:38:55.676 clat percentiles (usec): 00:38:55.676 | 1.00th=[40633], 5.00th=[42730], 10.00th=[42730], 20.00th=[43254], 00:38:55.676 | 30.00th=[43779], 40.00th=[43779], 50.00th=[43779], 60.00th=[43779], 00:38:55.676 | 70.00th=[44303], 80.00th=[44303], 90.00th=[44827], 95.00th=[45351], 00:38:55.676 | 99.00th=[46924], 99.50th=[63701], 99.90th=[69731], 99.95th=[69731], 00:38:55.676 | 99.99th=[69731] 00:38:55.676 bw ( KiB/s): min= 1280, max= 1536, per=4.17%, avg=1441.68, stdev=71.93, samples=19 00:38:55.676 iops : min= 320, max= 384, avg=360.42, stdev=17.98, samples=19 00:38:55.676 lat (msec) : 50=99.06%, 100=0.94% 00:38:55.676 cpu : usr=97.95%, sys=1.52%, ctx=17, majf=0, minf=1634 00:38:55.676 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:38:55.676 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.676 complete : 
0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.676 issued rwts: total=3616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:55.676 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:55.676 filename1: (groupid=0, jobs=1): err= 0: pid=68662: Sat Jul 13 22:22:14 2024 00:38:55.676 read: IOPS=363, BW=1455KiB/s (1490kB/s)(14.2MiB/10031msec) 00:38:55.676 slat (nsec): min=5351, max=78834, avg=17289.15, stdev=6834.29 00:38:55.677 clat (usec): min=9131, max=71832, avg=43778.09, stdev=5002.37 00:38:55.677 lat (usec): min=9149, max=71845, avg=43795.38, stdev=5001.41 00:38:55.677 clat percentiles (usec): 00:38:55.677 | 1.00th=[19792], 5.00th=[42730], 10.00th=[43254], 20.00th=[43779], 00:38:55.677 | 30.00th=[43779], 40.00th=[43779], 50.00th=[44303], 60.00th=[44303], 00:38:55.677 | 70.00th=[44303], 80.00th=[44827], 90.00th=[45351], 95.00th=[45876], 00:38:55.677 | 99.00th=[64750], 99.50th=[65799], 99.90th=[69731], 99.95th=[71828], 00:38:55.677 | 99.99th=[71828] 00:38:55.677 bw ( KiB/s): min= 1408, max= 1664, per=4.20%, avg=1452.80, stdev=75.15, samples=20 00:38:55.677 iops : min= 352, max= 416, avg=363.20, stdev=18.79, samples=20 00:38:55.677 lat (msec) : 10=0.19%, 20=0.85%, 50=97.34%, 100=1.62% 00:38:55.677 cpu : usr=98.11%, sys=1.42%, ctx=15, majf=0, minf=1636 00:38:55.677 IO depths : 1=4.7%, 2=10.8%, 4=24.6%, 8=52.1%, 16=7.8%, 32=0.0%, >=64=0.0% 00:38:55.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.677 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.677 issued rwts: total=3648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:55.677 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:55.677 filename1: (groupid=0, jobs=1): err= 0: pid=68663: Sat Jul 13 22:22:14 2024 00:38:55.677 read: IOPS=358, BW=1436KiB/s (1470kB/s)(14.1MiB/10029msec) 00:38:55.677 slat (nsec): min=6178, max=64129, avg=18935.62, stdev=7411.98 00:38:55.677 clat (msec): min=22, max=121, avg=44.41, stdev= 4.53 00:38:55.677 lat (msec): min=22, max=121, avg=44.43, stdev= 4.53 00:38:55.677 clat percentiles (msec): 00:38:55.677 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 44], 00:38:55.677 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 45], 00:38:55.677 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 45], 95.00th=[ 46], 00:38:55.677 | 99.00th=[ 47], 99.50th=[ 63], 99.90th=[ 107], 99.95th=[ 122], 00:38:55.677 | 99.99th=[ 122] 00:38:55.677 bw ( KiB/s): min= 1280, max= 1536, per=4.14%, avg=1433.70, stdev=66.92, samples=20 00:38:55.677 iops : min= 320, max= 384, avg=358.40, stdev=16.74, samples=20 00:38:55.677 lat (msec) : 50=99.33%, 100=0.22%, 250=0.44% 00:38:55.677 cpu : usr=98.08%, sys=1.46%, ctx=14, majf=0, minf=1635 00:38:55.677 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:55.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.677 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.677 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:55.677 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:55.677 filename2: (groupid=0, jobs=1): err= 0: pid=68664: Sat Jul 13 22:22:14 2024 00:38:55.677 read: IOPS=361, BW=1444KiB/s (1479kB/s)(14.1MiB/10016msec) 00:38:55.677 slat (nsec): min=5496, max=94957, avg=40325.32, stdev=11211.01 00:38:55.677 clat (usec): min=28710, max=63933, avg=43970.11, stdev=1824.33 00:38:55.677 lat (usec): min=28742, max=63956, avg=44010.44, stdev=1823.00 00:38:55.677 clat 
percentiles (usec): 00:38:55.677 | 1.00th=[42206], 5.00th=[42730], 10.00th=[43254], 20.00th=[43254], 00:38:55.677 | 30.00th=[43779], 40.00th=[43779], 50.00th=[43779], 60.00th=[44303], 00:38:55.677 | 70.00th=[44303], 80.00th=[44303], 90.00th=[44827], 95.00th=[45351], 00:38:55.677 | 99.00th=[46400], 99.50th=[46924], 99.90th=[63701], 99.95th=[63701], 00:38:55.677 | 99.99th=[63701] 00:38:55.677 bw ( KiB/s): min= 1408, max= 1536, per=4.16%, avg=1440.00, stdev=56.87, samples=20 00:38:55.677 iops : min= 352, max= 384, avg=360.00, stdev=14.22, samples=20 00:38:55.677 lat (msec) : 50=99.56%, 100=0.44% 00:38:55.677 cpu : usr=97.87%, sys=1.64%, ctx=15, majf=0, minf=1636 00:38:55.677 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:55.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.677 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.677 issued rwts: total=3616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:55.677 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:55.677 filename2: (groupid=0, jobs=1): err= 0: pid=68665: Sat Jul 13 22:22:14 2024 00:38:55.677 read: IOPS=361, BW=1444KiB/s (1479kB/s)(14.1MiB/10033msec) 00:38:55.677 slat (nsec): min=5465, max=90322, avg=38353.33, stdev=12701.62 00:38:55.677 clat (usec): min=24082, max=71198, avg=43942.35, stdev=3197.26 00:38:55.677 lat (usec): min=24101, max=71256, avg=43980.70, stdev=3197.47 00:38:55.677 clat percentiles (usec): 00:38:55.677 | 1.00th=[28967], 5.00th=[42730], 10.00th=[42730], 20.00th=[43254], 00:38:55.677 | 30.00th=[43779], 40.00th=[43779], 50.00th=[43779], 60.00th=[44303], 00:38:55.677 | 70.00th=[44303], 80.00th=[44303], 90.00th=[44827], 95.00th=[45351], 00:38:55.677 | 99.00th=[52167], 99.50th=[68682], 99.90th=[70779], 99.95th=[70779], 00:38:55.677 | 99.99th=[70779] 00:38:55.677 bw ( KiB/s): min= 1408, max= 1536, per=4.17%, avg=1442.40, stdev=56.46, samples=20 00:38:55.677 iops : min= 352, max= 384, avg=360.60, stdev=14.11, samples=20 00:38:55.677 lat (msec) : 50=98.84%, 100=1.16% 00:38:55.677 cpu : usr=97.63%, sys=1.75%, ctx=14, majf=0, minf=1634 00:38:55.677 IO depths : 1=5.5%, 2=11.3%, 4=23.1%, 8=52.8%, 16=7.4%, 32=0.0%, >=64=0.0% 00:38:55.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.677 complete : 0=0.0%, 4=93.7%, 8=0.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.677 issued rwts: total=3622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:55.677 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:55.677 filename2: (groupid=0, jobs=1): err= 0: pid=68666: Sat Jul 13 22:22:14 2024 00:38:55.677 read: IOPS=359, BW=1439KiB/s (1473kB/s)(14.1MiB/10008msec) 00:38:55.677 slat (nsec): min=5836, max=73619, avg=34958.83, stdev=9287.32 00:38:55.677 clat (usec): min=29571, max=95323, avg=44163.70, stdev=3665.69 00:38:55.677 lat (usec): min=29599, max=95344, avg=44198.66, stdev=3664.09 00:38:55.677 clat percentiles (usec): 00:38:55.677 | 1.00th=[41681], 5.00th=[42730], 10.00th=[43254], 20.00th=[43254], 00:38:55.677 | 30.00th=[43779], 40.00th=[43779], 50.00th=[43779], 60.00th=[44303], 00:38:55.677 | 70.00th=[44303], 80.00th=[44303], 90.00th=[44827], 95.00th=[45351], 00:38:55.677 | 99.00th=[46924], 99.50th=[49546], 99.90th=[94897], 99.95th=[94897], 00:38:55.677 | 99.99th=[94897] 00:38:55.677 bw ( KiB/s): min= 1282, max= 1536, per=4.15%, avg=1435.05, stdev=68.27, samples=19 00:38:55.677 iops : min= 320, max= 384, avg=358.74, stdev=17.13, samples=19 00:38:55.677 lat (msec) : 50=99.50%, 100=0.50% 
00:38:55.677 cpu : usr=98.05%, sys=1.47%, ctx=26, majf=0, minf=1634 00:38:55.677 IO depths : 1=6.1%, 2=12.2%, 4=24.8%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:38:55.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.677 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.677 issued rwts: total=3600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:55.677 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:55.677 filename2: (groupid=0, jobs=1): err= 0: pid=68667: Sat Jul 13 22:22:14 2024 00:38:55.677 read: IOPS=352, BW=1410KiB/s (1444kB/s)(13.8MiB/10007msec) 00:38:55.677 slat (nsec): min=11589, max=85220, avg=28395.15, stdev=12523.43 00:38:55.677 clat (msec): min=19, max=131, avg=45.18, stdev= 8.00 00:38:55.677 lat (msec): min=19, max=131, avg=45.21, stdev= 8.00 00:38:55.677 clat percentiles (msec): 00:38:55.677 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 43], 20.00th=[ 44], 00:38:55.677 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 45], 60.00th=[ 45], 00:38:55.677 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 51], 95.00th=[ 61], 00:38:55.677 | 99.00th=[ 69], 99.50th=[ 74], 99.90th=[ 111], 99.95th=[ 131], 00:38:55.677 | 99.99th=[ 132] 00:38:55.677 bw ( KiB/s): min= 1154, max= 1520, per=4.07%, avg=1406.53, stdev=82.68, samples=19 00:38:55.677 iops : min= 288, max= 380, avg=351.58, stdev=20.74, samples=19 00:38:55.677 lat (msec) : 20=0.03%, 50=89.97%, 100=9.55%, 250=0.45% 00:38:55.677 cpu : usr=97.91%, sys=1.59%, ctx=13, majf=0, minf=1636 00:38:55.677 IO depths : 1=0.1%, 2=3.6%, 4=15.0%, 8=67.3%, 16=13.9%, 32=0.0%, >=64=0.0% 00:38:55.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.677 complete : 0=0.0%, 4=92.1%, 8=3.8%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.677 issued rwts: total=3528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:55.677 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:55.677 filename2: (groupid=0, jobs=1): err= 0: pid=68668: Sat Jul 13 22:22:14 2024 00:38:55.677 read: IOPS=357, BW=1428KiB/s (1463kB/s)(14.0MiB/10008msec) 00:38:55.677 slat (nsec): min=10638, max=92186, avg=34876.47, stdev=9675.70 00:38:55.677 clat (msec): min=18, max=101, avg=44.49, stdev= 5.03 00:38:55.677 lat (msec): min=18, max=101, avg=44.52, stdev= 5.03 00:38:55.677 clat percentiles (msec): 00:38:55.677 | 1.00th=[ 33], 5.00th=[ 43], 10.00th=[ 44], 20.00th=[ 44], 00:38:55.677 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 45], 00:38:55.677 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 45], 95.00th=[ 46], 00:38:55.677 | 99.00th=[ 69], 99.50th=[ 73], 99.90th=[ 102], 99.95th=[ 102], 00:38:55.677 | 99.99th=[ 102] 00:38:55.677 bw ( KiB/s): min= 1282, max= 1536, per=4.12%, avg=1424.11, stdev=58.40, samples=19 00:38:55.677 iops : min= 320, max= 384, avg=356.00, stdev=14.67, samples=19 00:38:55.677 lat (msec) : 20=0.06%, 50=97.43%, 100=2.07%, 250=0.45% 00:38:55.677 cpu : usr=98.26%, sys=1.25%, ctx=13, majf=0, minf=1633 00:38:55.677 IO depths : 1=5.5%, 2=11.2%, 4=23.3%, 8=52.7%, 16=7.4%, 32=0.0%, >=64=0.0% 00:38:55.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.677 complete : 0=0.0%, 4=93.7%, 8=0.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.677 issued rwts: total=3574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:55.677 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:55.677 filename2: (groupid=0, jobs=1): err= 0: pid=68669: Sat Jul 13 22:22:14 2024 00:38:55.677 read: IOPS=357, BW=1430KiB/s (1464kB/s)(14.0MiB/10016msec) 00:38:55.677 slat 
(usec): min=4, max=204, avg=31.68, stdev=14.05 00:38:55.677 clat (usec): min=21448, max=81247, avg=44441.58, stdev=6014.62 00:38:55.677 lat (usec): min=21483, max=81266, avg=44473.26, stdev=6013.42 00:38:55.677 clat percentiles (usec): 00:38:55.677 | 1.00th=[23725], 5.00th=[41681], 10.00th=[43254], 20.00th=[43254], 00:38:55.677 | 30.00th=[43779], 40.00th=[43779], 50.00th=[43779], 60.00th=[44303], 00:38:55.677 | 70.00th=[44303], 80.00th=[44827], 90.00th=[45351], 95.00th=[55313], 00:38:55.677 | 99.00th=[66323], 99.50th=[69731], 99.90th=[81265], 99.95th=[81265], 00:38:55.677 | 99.99th=[81265] 00:38:55.677 bw ( KiB/s): min= 1280, max= 1536, per=4.12%, avg=1426.53, stdev=59.45, samples=19 00:38:55.677 iops : min= 320, max= 384, avg=356.63, stdev=14.86, samples=19 00:38:55.677 lat (msec) : 50=94.22%, 100=5.78% 00:38:55.677 cpu : usr=97.89%, sys=1.58%, ctx=15, majf=0, minf=1636 00:38:55.677 IO depths : 1=3.4%, 2=8.8%, 4=22.0%, 8=56.4%, 16=9.4%, 32=0.0%, >=64=0.0% 00:38:55.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.677 complete : 0=0.0%, 4=93.5%, 8=1.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.677 issued rwts: total=3580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:55.677 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:55.677 filename2: (groupid=0, jobs=1): err= 0: pid=68670: Sat Jul 13 22:22:14 2024 00:38:55.677 read: IOPS=365, BW=1463KiB/s (1498kB/s)(14.3MiB/10002msec) 00:38:55.677 slat (usec): min=11, max=109, avg=34.34, stdev=13.95 00:38:55.677 clat (msec): min=21, max=122, avg=43.47, stdev= 5.60 00:38:55.677 lat (msec): min=21, max=122, avg=43.50, stdev= 5.60 00:38:55.677 clat percentiles (msec): 00:38:55.677 | 1.00th=[ 26], 5.00th=[ 34], 10.00th=[ 43], 20.00th=[ 44], 00:38:55.678 | 30.00th=[ 44], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:38:55.678 | 70.00th=[ 45], 80.00th=[ 45], 90.00th=[ 45], 95.00th=[ 46], 00:38:55.678 | 99.00th=[ 58], 99.50th=[ 69], 99.90th=[ 97], 99.95th=[ 123], 00:38:55.678 | 99.99th=[ 123] 00:38:55.678 bw ( KiB/s): min= 1248, max= 1728, per=4.22%, avg=1459.47, stdev=99.30, samples=19 00:38:55.678 iops : min= 312, max= 432, avg=364.84, stdev=24.82, samples=19 00:38:55.678 lat (msec) : 50=97.48%, 100=2.46%, 250=0.05% 00:38:55.678 cpu : usr=98.12%, sys=1.42%, ctx=12, majf=0, minf=1636 00:38:55.678 IO depths : 1=4.8%, 2=9.7%, 4=20.2%, 8=57.1%, 16=8.3%, 32=0.0%, >=64=0.0% 00:38:55.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.678 complete : 0=0.0%, 4=92.9%, 8=2.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.678 issued rwts: total=3658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:55.678 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:55.678 filename2: (groupid=0, jobs=1): err= 0: pid=68671: Sat Jul 13 22:22:14 2024 00:38:55.678 read: IOPS=363, BW=1455KiB/s (1490kB/s)(14.2MiB/10031msec) 00:38:55.678 slat (usec): min=6, max=115, avg=62.45, stdev=16.44 00:38:55.678 clat (usec): min=11760, max=46637, avg=43411.50, stdev=2653.61 00:38:55.678 lat (usec): min=11804, max=46677, avg=43473.94, stdev=2657.07 00:38:55.678 clat percentiles (usec): 00:38:55.678 | 1.00th=[31327], 5.00th=[42206], 10.00th=[42730], 20.00th=[43254], 00:38:55.678 | 30.00th=[43254], 40.00th=[43254], 50.00th=[43779], 60.00th=[43779], 00:38:55.678 | 70.00th=[43779], 80.00th=[44303], 90.00th=[44827], 95.00th=[44827], 00:38:55.678 | 99.00th=[45876], 99.50th=[46400], 99.90th=[46400], 99.95th=[46400], 00:38:55.678 | 99.99th=[46400] 00:38:55.678 bw ( KiB/s): min= 1408, max= 1667, per=4.20%, 
avg=1452.95, stdev=75.60, samples=20 00:38:55.678 iops : min= 352, max= 416, avg=363.20, stdev=18.79, samples=20 00:38:55.678 lat (msec) : 20=0.44%, 50=99.56% 00:38:55.678 cpu : usr=98.06%, sys=1.40%, ctx=15, majf=0, minf=1637 00:38:55.678 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:55.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.678 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:55.678 issued rwts: total=3648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:55.678 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:55.678 00:38:55.678 Run status group 0 (all jobs): 00:38:55.678 READ: bw=33.8MiB/s (35.4MB/s), 1410KiB/s-1463KiB/s (1444kB/s-1498kB/s), io=339MiB (355MB), run=10002-10033msec 00:38:55.936 ----------------------------------------------------- 00:38:55.936 Suppressions used: 00:38:55.936 count bytes template 00:38:55.936 45 402 /usr/src/fio/parse.c 00:38:55.936 1 8 libtcmalloc_minimal.so 00:38:55.936 1 904 libcrypto.so 00:38:55.936 ----------------------------------------------------- 00:38:55.936 00:38:55.936 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:38:55.936 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:55.936 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:55.936 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:55.936 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:55.936 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:55.936 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:55.936 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:55.936 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.936 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:55.936 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:55.936 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:55.936 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.936 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:55.936 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:55.936 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:55.937 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:55.937 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:55.937 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:55.937 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.937 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:55.937 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:55.937 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:55.937 22:22:15 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.937 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:55.937 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:38:55.937 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:38:55.937 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:38:55.937 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:55.937 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:55.937 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.937 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:38:55.937 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:55.937 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:55.937 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:55.937 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:38:55.937 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:56.195 bdev_null0 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:56.195 [2024-07-13 22:22:15.358386] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:56.195 bdev_null1 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:56.195 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:38:56.196 { 00:38:56.196 "params": { 00:38:56.196 "name": "Nvme$subsystem", 00:38:56.196 "trtype": "$TEST_TRANSPORT", 00:38:56.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:56.196 "adrfam": "ipv4", 00:38:56.196 "trsvcid": "$NVMF_PORT", 00:38:56.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:56.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:56.196 "hdgst": ${hdgst:-false}, 00:38:56.196 "ddgst": ${ddgst:-false} 00:38:56.196 }, 00:38:56.196 "method": "bdev_nvme_attach_controller" 00:38:56.196 } 00:38:56.196 EOF 00:38:56.196 )") 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:38:56.196 { 00:38:56.196 "params": { 00:38:56.196 "name": "Nvme$subsystem", 00:38:56.196 "trtype": "$TEST_TRANSPORT", 00:38:56.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:56.196 "adrfam": "ipv4", 00:38:56.196 "trsvcid": "$NVMF_PORT", 00:38:56.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:56.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:56.196 "hdgst": ${hdgst:-false}, 00:38:56.196 "ddgst": ${ddgst:-false} 00:38:56.196 }, 00:38:56.196 "method": "bdev_nvme_attach_controller" 00:38:56.196 } 00:38:56.196 EOF 00:38:56.196 )") 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:38:56.196 "params": { 00:38:56.196 "name": "Nvme0", 00:38:56.196 "trtype": "tcp", 00:38:56.196 "traddr": "10.0.0.2", 00:38:56.196 "adrfam": "ipv4", 00:38:56.196 "trsvcid": "4420", 00:38:56.196 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:56.196 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:56.196 "hdgst": false, 00:38:56.196 "ddgst": false 00:38:56.196 }, 00:38:56.196 "method": "bdev_nvme_attach_controller" 00:38:56.196 },{ 00:38:56.196 "params": { 00:38:56.196 "name": "Nvme1", 00:38:56.196 "trtype": "tcp", 00:38:56.196 "traddr": "10.0.0.2", 00:38:56.196 "adrfam": "ipv4", 00:38:56.196 "trsvcid": "4420", 00:38:56.196 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:56.196 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:56.196 "hdgst": false, 00:38:56.196 "ddgst": false 00:38:56.196 }, 00:38:56.196 "method": "bdev_nvme_attach_controller" 00:38:56.196 }' 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:56.196 22:22:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:56.454 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:56.454 ... 00:38:56.454 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:56.454 ... 
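For reference, the sequence traced above boils down to two things: gen_nvmf_target_json emits a bdev JSON config whose bdev_nvme_attach_controller entries point at the two TCP listeners, and fio is launched with the spdk_bdev engine reading that config from /dev/fd/62. A minimal by-hand sketch of the same step follows; the attach parameters are copied from the printed config, while the outer "subsystems"/"config" JSON wrapper, the bdev name Nvme0n1 (SPDK's controller-plus-namespace naming convention), and the job-file body (reconstructed from the banner above and the bs=8k,16k,128k / iodepth=8 / runtime=5 parameters) are assumptions — the harness pipes both files through file descriptors and never writes them to disk.

# Sketch only, one target instead of the harness's two; params verbatim
# from the printed config, wrapper and job file reconstructed (assumed).
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Job file matching the banner: random 8k reads at queue depth 8 for 5s
# against the attached namespace, which surfaces as bdev Nvme0n1.
cat > /tmp/dif.fio <<'EOF'
[global]
thread=1
time_based=1
runtime=5
iodepth=8
rw=randread
bs=8k

[filename0]
filename=Nvme0n1
EOF

# Same invocation shape as the trace; the harness additionally preloads
# libasan.so.8 ahead of the plugin because this SPDK build uses ASan.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio

With both subsystems attached, as in the run above, the second controller would appear as Nvme1n1 and get its own job section, which is why the banner lists filename0 and filename1.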
00:38:56.454 fio-3.35 00:38:56.454 Starting 4 threads 00:38:56.454 EAL: No free 2048 kB hugepages reported on node 1 00:39:03.023 00:39:03.023 filename0: (groupid=0, jobs=1): err= 0: pid=70050: Sat Jul 13 22:22:21 2024 00:39:03.023 read: IOPS=1423, BW=11.1MiB/s (11.7MB/s)(55.6MiB/5001msec) 00:39:03.023 slat (nsec): min=5542, max=53613, avg=22229.61, stdev=6091.31 00:39:03.023 clat (usec): min=1836, max=11064, avg=5553.41, stdev=1040.12 00:39:03.023 lat (usec): min=1860, max=11083, avg=5575.64, stdev=1039.17 00:39:03.023 clat percentiles (usec): 00:39:03.023 | 1.00th=[ 3916], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4883], 00:39:03.023 | 30.00th=[ 5014], 40.00th=[ 5080], 50.00th=[ 5276], 60.00th=[ 5407], 00:39:03.023 | 70.00th=[ 5604], 80.00th=[ 5932], 90.00th=[ 7373], 95.00th=[ 7570], 00:39:03.023 | 99.00th=[ 8979], 99.50th=[ 9372], 99.90th=[10028], 99.95th=[10028], 00:39:03.023 | 99.99th=[11076] 00:39:03.023 bw ( KiB/s): min=10581, max=11904, per=24.78%, avg=11341.00, stdev=506.59, samples=9 00:39:03.023 iops : min= 1322, max= 1488, avg=1417.56, stdev=63.44, samples=9 00:39:03.023 lat (msec) : 2=0.04%, 4=1.33%, 10=98.50%, 20=0.13% 00:39:03.023 cpu : usr=94.36%, sys=4.82%, ctx=64, majf=0, minf=1634 00:39:03.023 IO depths : 1=0.1%, 2=5.1%, 4=67.6%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:03.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.023 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.023 issued rwts: total=7118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:03.023 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:03.023 filename0: (groupid=0, jobs=1): err= 0: pid=70051: Sat Jul 13 22:22:21 2024 00:39:03.023 read: IOPS=1480, BW=11.6MiB/s (12.1MB/s)(57.9MiB/5004msec) 00:39:03.023 slat (nsec): min=4896, max=61781, avg=16637.75, stdev=6359.59 00:39:03.023 clat (usec): min=1650, max=9621, avg=5349.57, stdev=944.52 00:39:03.023 lat (usec): min=1675, max=9645, avg=5366.21, stdev=943.97 00:39:03.023 clat percentiles (usec): 00:39:03.023 | 1.00th=[ 3720], 5.00th=[ 4228], 10.00th=[ 4490], 20.00th=[ 4686], 00:39:03.023 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5145], 60.00th=[ 5342], 00:39:03.023 | 70.00th=[ 5473], 80.00th=[ 5669], 90.00th=[ 7111], 95.00th=[ 7504], 00:39:03.023 | 99.00th=[ 8029], 99.50th=[ 8586], 99.90th=[ 9503], 99.95th=[ 9503], 00:39:03.023 | 99.99th=[ 9634] 00:39:03.023 bw ( KiB/s): min=11056, max=12720, per=25.89%, avg=11848.00, stdev=440.59, samples=10 00:39:03.023 iops : min= 1382, max= 1590, avg=1481.00, stdev=55.07, samples=10 00:39:03.023 lat (msec) : 2=0.05%, 4=2.48%, 10=97.46% 00:39:03.023 cpu : usr=94.08%, sys=5.36%, ctx=12, majf=0, minf=1637 00:39:03.023 IO depths : 1=0.1%, 2=6.1%, 4=66.3%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:03.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.023 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.023 issued rwts: total=7408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:03.023 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:03.023 filename1: (groupid=0, jobs=1): err= 0: pid=70052: Sat Jul 13 22:22:21 2024 00:39:03.023 read: IOPS=1449, BW=11.3MiB/s (11.9MB/s)(56.6MiB/5002msec) 00:39:03.023 slat (nsec): min=5273, max=54030, avg=18320.23, stdev=5840.53 00:39:03.023 clat (usec): min=1422, max=10029, avg=5463.35, stdev=967.71 00:39:03.023 lat (usec): min=1442, max=10053, avg=5481.67, stdev=967.30 00:39:03.023 clat percentiles (usec): 00:39:03.023 | 1.00th=[ 3752], 
5.00th=[ 4293], 10.00th=[ 4555], 20.00th=[ 4817], 00:39:03.023 | 30.00th=[ 5014], 40.00th=[ 5145], 50.00th=[ 5342], 60.00th=[ 5407], 00:39:03.023 | 70.00th=[ 5538], 80.00th=[ 5735], 90.00th=[ 7177], 95.00th=[ 7570], 00:39:03.023 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[ 9241], 99.95th=[ 9634], 00:39:03.023 | 99.99th=[10028] 00:39:03.023 bw ( KiB/s): min=10736, max=12112, per=25.27%, avg=11566.22, stdev=414.96, samples=9 00:39:03.023 iops : min= 1342, max= 1514, avg=1445.78, stdev=51.87, samples=9 00:39:03.023 lat (msec) : 2=0.06%, 4=1.42%, 10=98.51%, 20=0.01% 00:39:03.023 cpu : usr=95.00%, sys=4.40%, ctx=9, majf=0, minf=1636 00:39:03.023 IO depths : 1=0.1%, 2=4.8%, 4=67.7%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:03.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.023 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.023 issued rwts: total=7248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:03.023 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:03.023 filename1: (groupid=0, jobs=1): err= 0: pid=70053: Sat Jul 13 22:22:21 2024 00:39:03.023 read: IOPS=1370, BW=10.7MiB/s (11.2MB/s)(53.6MiB/5002msec) 00:39:03.023 slat (nsec): min=4492, max=55430, avg=17315.12, stdev=5532.83 00:39:03.023 clat (usec): min=1243, max=16623, avg=5788.92, stdev=898.40 00:39:03.023 lat (usec): min=1267, max=16640, avg=5806.23, stdev=898.19 00:39:03.023 clat percentiles (usec): 00:39:03.023 | 1.00th=[ 3687], 5.00th=[ 4752], 10.00th=[ 4948], 20.00th=[ 5145], 00:39:03.023 | 30.00th=[ 5407], 40.00th=[ 5538], 50.00th=[ 5669], 60.00th=[ 5800], 00:39:03.023 | 70.00th=[ 6063], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 7439], 00:39:03.023 | 99.00th=[ 8586], 99.50th=[ 8979], 99.90th=[14746], 99.95th=[14877], 00:39:03.023 | 99.99th=[16581] 00:39:03.023 bw ( KiB/s): min=10368, max=11648, per=24.08%, avg=11022.22, stdev=478.76, samples=9 00:39:03.023 iops : min= 1296, max= 1456, avg=1377.78, stdev=59.85, samples=9 00:39:03.023 lat (msec) : 2=0.18%, 4=1.18%, 10=98.53%, 20=0.12% 00:39:03.023 cpu : usr=94.84%, sys=4.60%, ctx=8, majf=0, minf=1639 00:39:03.023 IO depths : 1=0.1%, 2=4.2%, 4=63.7%, 8=32.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:03.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.023 complete : 0=0.0%, 4=96.1%, 8=3.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:03.023 issued rwts: total=6856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:03.023 latency : target=0, window=0, percentile=100.00%, depth=8 00:39:03.023 00:39:03.023 Run status group 0 (all jobs): 00:39:03.023 READ: bw=44.7MiB/s (46.9MB/s), 10.7MiB/s-11.6MiB/s (11.2MB/s-12.1MB/s), io=224MiB (235MB), run=5001-5004msec 00:39:03.615 ----------------------------------------------------- 00:39:03.615 Suppressions used: 00:39:03.615 count bytes template 00:39:03.615 6 52 /usr/src/fio/parse.c 00:39:03.615 1 8 libtcmalloc_minimal.so 00:39:03.615 1 904 libcrypto.so 00:39:03.615 ----------------------------------------------------- 00:39:03.615 00:39:03.615 22:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:39:03.615 22:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:39:03.615 22:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:03.615 22:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:03.615 22:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:39:03.615 22:22:22 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:03.615 22:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.615 22:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:03.882 22:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.882 22:22:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:03.882 22:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.882 22:22:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:03.882 22:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.882 22:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:39:03.882 22:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:39:03.882 22:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:39:03.882 22:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:03.882 22:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.882 22:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:03.882 22:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.882 22:22:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:39:03.882 22:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.882 22:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:03.882 22:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.882 00:39:03.882 real 0m27.977s 00:39:03.882 user 4m32.403s 00:39:03.882 sys 0m8.782s 00:39:03.882 22:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:03.882 22:22:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:39:03.882 ************************************ 00:39:03.882 END TEST fio_dif_rand_params 00:39:03.882 ************************************ 00:39:03.882 22:22:23 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:39:03.882 22:22:23 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:39:03.882 22:22:23 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:03.882 22:22:23 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:03.882 22:22:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:03.882 ************************************ 00:39:03.882 START TEST fio_dif_digest 00:39:03.882 ************************************ 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # 
numjobs=3 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:03.882 bdev_null0 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:03.882 [2024-07-13 22:22:23.091211] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:39:03.882 { 00:39:03.882 "params": { 00:39:03.882 "name": 
"Nvme$subsystem", 00:39:03.882 "trtype": "$TEST_TRANSPORT", 00:39:03.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:03.882 "adrfam": "ipv4", 00:39:03.882 "trsvcid": "$NVMF_PORT", 00:39:03.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:03.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:03.882 "hdgst": ${hdgst:-false}, 00:39:03.882 "ddgst": ${ddgst:-false} 00:39:03.882 }, 00:39:03.882 "method": "bdev_nvme_attach_controller" 00:39:03.882 } 00:39:03.882 EOF 00:39:03.882 )") 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:39:03.882 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:39:03.883 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:39:03.883 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:39:03.883 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:03.883 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:39:03.883 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:39:03.883 22:22:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:39:03.883 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:39:03.883 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:39:03.883 22:22:23 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:39:03.883 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:39:03.883 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:39:03.883 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:39:03.883 22:22:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:39:03.883 22:22:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:39:03.883 22:22:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:39:03.883 "params": { 00:39:03.883 "name": "Nvme0", 00:39:03.883 "trtype": "tcp", 00:39:03.883 "traddr": "10.0.0.2", 00:39:03.883 "adrfam": "ipv4", 00:39:03.883 "trsvcid": "4420", 00:39:03.883 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:03.883 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:03.883 "hdgst": true, 00:39:03.883 "ddgst": true 00:39:03.883 }, 00:39:03.883 "method": "bdev_nvme_attach_controller" 00:39:03.883 }' 00:39:03.883 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:39:03.883 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:39:03.883 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # break 00:39:03.883 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:39:03.883 22:22:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:39:04.141 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:39:04.141 ... 00:39:04.141 fio-3.35 00:39:04.141 Starting 3 threads 00:39:04.141 EAL: No free 2048 kB hugepages reported on node 1 00:39:16.333 00:39:16.334 filename0: (groupid=0, jobs=1): err= 0: pid=71053: Sat Jul 13 22:22:34 2024 00:39:16.334 read: IOPS=183, BW=23.0MiB/s (24.1MB/s)(231MiB/10046msec) 00:39:16.334 slat (nsec): min=7838, max=77791, avg=28826.87, stdev=8393.77 00:39:16.334 clat (usec): min=9708, max=56460, avg=16276.50, stdev=2149.98 00:39:16.334 lat (usec): min=9739, max=56483, avg=16305.33, stdev=2150.09 00:39:16.334 clat percentiles (usec): 00:39:16.334 | 1.00th=[11076], 5.00th=[12387], 10.00th=[14091], 20.00th=[15139], 00:39:16.334 | 30.00th=[15664], 40.00th=[16057], 50.00th=[16450], 60.00th=[16909], 00:39:16.334 | 70.00th=[17171], 80.00th=[17433], 90.00th=[18220], 95.00th=[18744], 00:39:16.334 | 99.00th=[19792], 99.50th=[20579], 99.90th=[49021], 99.95th=[56361], 00:39:16.334 | 99.99th=[56361] 00:39:16.334 bw ( KiB/s): min=21760, max=25600, per=36.04%, avg=23590.40, stdev=872.20, samples=20 00:39:16.334 iops : min= 170, max= 200, avg=184.30, stdev= 6.81, samples=20 00:39:16.334 lat (msec) : 10=0.05%, 20=99.24%, 50=0.65%, 100=0.05% 00:39:16.334 cpu : usr=90.07%, sys=8.05%, ctx=271, majf=0, minf=1640 00:39:16.334 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:16.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.334 issued rwts: total=1845,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:16.334 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:16.334 filename0: (groupid=0, jobs=1): err= 0: pid=71054: Sat Jul 13 22:22:34 2024 00:39:16.334 read: IOPS=163, BW=20.4MiB/s (21.4MB/s)(205MiB/10050msec) 00:39:16.334 slat (nsec): min=7398, max=58300, avg=24857.97, stdev=6975.59 00:39:16.334 clat (usec): min=11371, max=62299, avg=18336.37, stdev=5378.83 00:39:16.334 lat (usec): min=11392, max=62318, avg=18361.23, stdev=5378.38 00:39:16.334 clat percentiles (usec): 00:39:16.334 | 1.00th=[12256], 5.00th=[15139], 10.00th=[15926], 
20.00th=[16712], 00:39:16.334 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17695], 60.00th=[18220], 00:39:16.334 | 70.00th=[18482], 80.00th=[19006], 90.00th=[19530], 95.00th=[20317], 00:39:16.334 | 99.00th=[58983], 99.50th=[60031], 99.90th=[61080], 99.95th=[62129], 00:39:16.334 | 99.99th=[62129] 00:39:16.334 bw ( KiB/s): min=18176, max=23552, per=32.02%, avg=20955.65, stdev=1295.14, samples=20 00:39:16.334 iops : min= 142, max= 184, avg=163.70, stdev=10.12, samples=20 00:39:16.334 lat (msec) : 20=93.90%, 50=4.51%, 100=1.59% 00:39:16.334 cpu : usr=92.73%, sys=6.65%, ctx=26, majf=0, minf=1639 00:39:16.334 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:16.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.334 issued rwts: total=1639,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:16.334 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:16.334 filename0: (groupid=0, jobs=1): err= 0: pid=71055: Sat Jul 13 22:22:34 2024 00:39:16.334 read: IOPS=164, BW=20.6MiB/s (21.6MB/s)(207MiB/10045msec) 00:39:16.334 slat (nsec): min=6284, max=57230, avg=22491.46, stdev=5657.57 00:39:16.334 clat (usec): min=10886, max=61190, avg=18178.99, stdev=5202.38 00:39:16.334 lat (usec): min=10924, max=61210, avg=18201.48, stdev=5202.51 00:39:16.334 clat percentiles (usec): 00:39:16.334 | 1.00th=[12518], 5.00th=[14615], 10.00th=[15664], 20.00th=[16450], 00:39:16.334 | 30.00th=[16909], 40.00th=[17433], 50.00th=[17695], 60.00th=[17957], 00:39:16.334 | 70.00th=[18482], 80.00th=[18744], 90.00th=[19530], 95.00th=[20055], 00:39:16.334 | 99.00th=[58459], 99.50th=[59507], 99.90th=[61080], 99.95th=[61080], 00:39:16.334 | 99.99th=[61080] 00:39:16.334 bw ( KiB/s): min=17664, max=23040, per=32.31%, avg=21145.60, stdev=1429.94, samples=20 00:39:16.334 iops : min= 138, max= 180, avg=165.20, stdev=11.17, samples=20 00:39:16.334 lat (msec) : 20=93.72%, 50=4.89%, 100=1.39% 00:39:16.334 cpu : usr=93.52%, sys=5.89%, ctx=19, majf=0, minf=1635 00:39:16.334 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:16.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:16.334 issued rwts: total=1655,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:16.334 latency : target=0, window=0, percentile=100.00%, depth=3 00:39:16.334 00:39:16.334 Run status group 0 (all jobs): 00:39:16.334 READ: bw=63.9MiB/s (67.0MB/s), 20.4MiB/s-23.0MiB/s (21.4MB/s-24.1MB/s), io=642MiB (674MB), run=10045-10050msec 00:39:16.334 ----------------------------------------------------- 00:39:16.334 Suppressions used: 00:39:16.334 count bytes template 00:39:16.334 5 44 /usr/src/fio/parse.c 00:39:16.334 1 8 libtcmalloc_minimal.so 00:39:16.334 1 904 libcrypto.so 00:39:16.334 ----------------------------------------------------- 00:39:16.334 00:39:16.334 22:22:35 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:39:16.334 22:22:35 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:39:16.334 22:22:35 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:39:16.334 22:22:35 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:39:16.334 22:22:35 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:39:16.334 22:22:35 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:39:16.334 22:22:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.334 22:22:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:16.334 22:22:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.334 22:22:35 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:39:16.334 22:22:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:16.334 22:22:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:16.334 22:22:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:16.334 00:39:16.334 real 0m12.386s 00:39:16.334 user 0m30.025s 00:39:16.334 sys 0m2.516s 00:39:16.334 22:22:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:16.334 22:22:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:39:16.334 ************************************ 00:39:16.334 END TEST fio_dif_digest 00:39:16.334 ************************************ 00:39:16.334 22:22:35 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:39:16.334 22:22:35 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:39:16.334 22:22:35 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:39:16.334 22:22:35 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:16.334 22:22:35 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:39:16.334 22:22:35 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:16.334 22:22:35 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:39:16.334 22:22:35 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:16.334 22:22:35 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:16.334 rmmod nvme_tcp 00:39:16.334 rmmod nvme_fabrics 00:39:16.334 rmmod nvme_keyring 00:39:16.334 22:22:35 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:16.334 22:22:35 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:39:16.334 22:22:35 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:39:16.335 22:22:35 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 64271 ']' 00:39:16.335 22:22:35 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 64271 00:39:16.335 22:22:35 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 64271 ']' 00:39:16.335 22:22:35 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 64271 00:39:16.335 22:22:35 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:39:16.335 22:22:35 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:16.335 22:22:35 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64271 00:39:16.335 22:22:35 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:16.335 22:22:35 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:16.335 22:22:35 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64271' 00:39:16.335 killing process with pid 64271 00:39:16.335 22:22:35 nvmf_dif -- common/autotest_common.sh@967 -- # kill 64271 00:39:16.335 22:22:35 nvmf_dif -- common/autotest_common.sh@972 -- # wait 64271 00:39:17.708 22:22:36 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:39:17.708 22:22:36 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:18.641 Waiting for block devices as requested 00:39:18.641 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:39:18.899 0000:00:04.7 (8086 0e27): vfio-pci -> 
ioatdma 00:39:18.899 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:19.157 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:19.157 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:19.157 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:19.157 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:19.415 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:19.415 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:19.415 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:19.415 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:19.672 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:19.672 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:19.672 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:19.672 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:19.930 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:19.930 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:19.930 22:22:39 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:19.930 22:22:39 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:19.930 22:22:39 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:19.930 22:22:39 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:19.930 22:22:39 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:20.187 22:22:39 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:20.187 22:22:39 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:22.089 22:22:41 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:39:22.089 00:39:22.089 real 1m15.805s 00:39:22.089 user 6m41.269s 00:39:22.089 sys 0m20.676s 00:39:22.089 22:22:41 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:22.089 22:22:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:39:22.089 ************************************ 00:39:22.089 END TEST nvmf_dif 00:39:22.089 ************************************ 00:39:22.089 22:22:41 -- common/autotest_common.sh@1142 -- # return 0 00:39:22.089 22:22:41 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:22.089 22:22:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:22.089 22:22:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:22.089 22:22:41 -- common/autotest_common.sh@10 -- # set +x 00:39:22.089 ************************************ 00:39:22.089 START TEST nvmf_abort_qd_sizes 00:39:22.089 ************************************ 00:39:22.089 22:22:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:39:22.089 * Looking for test storage... 
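The abort_qd_sizes suite starting here drives SPDK's abort example at increasing queue depths, first against the userspace spdk_target and later against a kernel nvmet target. Per abort_qd_sizes.sh@26 the depths are 4, 24 and 64, so each rabort pass below amounts to roughly (a sketch; -q is the queue depth, -w rw -M 50 a 50% read mix, -o 4096 the I/O size in bytes, -r the transport ID of the target under test):

    for qd in 4 24 64; do
        ./spdk/build/examples/abort -q $qd -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
        # for the kernel-target half of the suite, traddr becomes 10.0.0.1
    done

Each run then reports how many of the in-flight I/Os the submitted aborts caught, i.e. the success/unsuccess counters in the output further down.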
00:39:22.089 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:22.089 22:22:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:39:22.090 22:22:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:39:24.002 22:22:43 
nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:24.002 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:24.002 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:24.002 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:24.002 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:24.003 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:39:24.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:24.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:39:24.003 00:39:24.003 --- 10.0.0.2 ping statistics --- 00:39:24.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:24.003 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:39:24.003 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:24.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:24.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:39:24.269 00:39:24.269 --- 10.0.0.1 ping statistics --- 00:39:24.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:24.269 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:39:24.269 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:24.269 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:39:24.269 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:39:24.269 22:22:43 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:25.201 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:39:25.201 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:39:25.201 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:39:25.201 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:39:25.201 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:39:25.201 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:39:25.201 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:39:25.201 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:39:25.201 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:39:25.201 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:39:25.201 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:39:25.201 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:39:25.458 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:39:25.458 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:39:25.458 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:39:25.458 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:39:26.394 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:39:26.394 22:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:26.394 22:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:39:26.394 22:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:39:26.394 22:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:26.394 22:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:39:26.394 22:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:39:26.394 22:22:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:39:26.394 22:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:39:26.394 22:22:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:39:26.394 22:22:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:26.394 22:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=76089 00:39:26.394 22:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:39:26.394 22:22:45 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 76089 00:39:26.394 22:22:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 76089 ']' 00:39:26.394 22:22:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:26.394 22:22:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:26.394 22:22:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:39:26.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:26.394 22:22:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:26.394 22:22:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:26.394 [2024-07-13 22:22:45.748032] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:39:26.394 [2024-07-13 22:22:45.748178] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:26.652 EAL: No free 2048 kB hugepages reported on node 1 00:39:26.652 [2024-07-13 22:22:45.890155] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:26.911 [2024-07-13 22:22:46.155209] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:26.911 [2024-07-13 22:22:46.155276] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:26.911 [2024-07-13 22:22:46.155304] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:26.911 [2024-07-13 22:22:46.155330] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:26.911 [2024-07-13 22:22:46.155354] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:26.911 [2024-07-13 22:22:46.155477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:26.911 [2024-07-13 22:22:46.155557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:26.911 [2024-07-13 22:22:46.155636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:26.911 [2024-07-13 22:22:46.155643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:39:27.505 22:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:27.505 22:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:39:27.505 22:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:27.505 22:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:39:27.505 22:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:27.505 22:22:46 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:27.505 22:22:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:39:27.505 22:22:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:39:27.505 22:22:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:39:27.505 22:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:39:27.505 22:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:39:27.505 22:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:39:27.505 22:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:39:27.505 22:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:39:27.505 22:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:39:27.505 22:22:46 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:39:27.505 22:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:39:27.505 22:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:39:27.505 22:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:39:27.505 22:22:46 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:39:27.505 22:22:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:39:27.505 22:22:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:39:27.505 22:22:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:39:27.505 22:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:27.505 22:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:27.505 22:22:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:27.505 ************************************ 00:39:27.505 START TEST spdk_target_abort 00:39:27.505 ************************************ 00:39:27.505 22:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:39:27.505 22:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:39:27.505 22:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:39:27.505 22:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:27.505 22:22:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:30.788 spdk_targetn1 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:30.788 [2024-07-13 22:22:49.583427] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:30.788 [2024-07-13 22:22:49.629381] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:30.788 22:22:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:30.788 EAL: No free 2048 kB hugepages 
reported on node 1 00:39:34.070 Initializing NVMe Controllers 00:39:34.070 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:34.070 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:34.070 Initialization complete. Launching workers. 00:39:34.070 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8504, failed: 0 00:39:34.070 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1273, failed to submit 7231 00:39:34.070 success 776, unsuccess 497, failed 0 00:39:34.070 22:22:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:34.070 22:22:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:34.070 EAL: No free 2048 kB hugepages reported on node 1 00:39:37.349 Initializing NVMe Controllers 00:39:37.349 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:37.349 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:37.349 Initialization complete. Launching workers. 00:39:37.349 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8515, failed: 0 00:39:37.349 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1239, failed to submit 7276 00:39:37.349 success 298, unsuccess 941, failed 0 00:39:37.350 22:22:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:37.350 22:22:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:37.350 EAL: No free 2048 kB hugepages reported on node 1 00:39:40.634 Initializing NVMe Controllers 00:39:40.634 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:39:40.634 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:40.634 Initialization complete. Launching workers. 
00:39:40.634 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27261, failed: 0 00:39:40.634 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2733, failed to submit 24528 00:39:40.634 success 223, unsuccess 2510, failed 0 00:39:40.634 22:22:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:39:40.634 22:22:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:40.634 22:22:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:40.634 22:22:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:40.634 22:22:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:39:40.634 22:22:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:40.634 22:22:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:42.008 22:23:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.008 22:23:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 76089 00:39:42.008 22:23:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 76089 ']' 00:39:42.008 22:23:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 76089 00:39:42.008 22:23:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:39:42.008 22:23:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:42.008 22:23:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76089 00:39:42.008 22:23:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:42.008 22:23:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:42.008 22:23:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76089' 00:39:42.008 killing process with pid 76089 00:39:42.008 22:23:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 76089 00:39:42.008 22:23:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 76089 00:39:42.944 00:39:42.944 real 0m15.611s 00:39:42.944 user 0m59.762s 00:39:42.944 sys 0m2.809s 00:39:42.944 22:23:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:42.944 22:23:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:42.944 ************************************ 00:39:42.944 END TEST spdk_target_abort 00:39:42.944 ************************************ 00:39:42.944 22:23:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:39:42.944 22:23:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:39:42.944 22:23:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:42.944 22:23:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:42.944 22:23:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:43.203 
************************************ 00:39:43.203 START TEST kernel_target_abort 00:39:43.203 ************************************ 00:39:43.203 22:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:39:43.203 22:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:39:43.203 22:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:39:43.203 22:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:39:43.203 22:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:39:43.203 22:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:39:43.203 22:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:39:43.203 22:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:39:43.203 22:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:39:43.203 22:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:39:43.203 22:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:39:43.203 22:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:39:43.203 22:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:39:43.203 22:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:39:43.203 22:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:39:43.203 22:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:43.203 22:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:43.203 22:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:39:43.203 22:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:39:43.203 22:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:39:43.203 22:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:39:43.203 22:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:39:43.203 22:23:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:44.137 Waiting for block devices as requested 00:39:44.137 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:39:44.137 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:44.395 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:44.395 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:44.395 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:44.653 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:44.653 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:44.653 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:44.653 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:44.911 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:44.911 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:44.911 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:44.911 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:45.169 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:45.169 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:45.169 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:45.169 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:45.735 22:23:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:39:45.735 22:23:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:45.735 22:23:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:39:45.735 22:23:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:39:45.735 22:23:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:45.735 22:23:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:39:45.735 22:23:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:39:45.735 22:23:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:39:45.735 22:23:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:45.735 No valid GPT data, bailing 00:39:45.735 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:45.735 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:39:45.735 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:39:45.735 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:39:45.735 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:39:45.735 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:45.736 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:45.736 22:23:05 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:45.736 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:39:45.736 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:39:45.736 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:39:45.736 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:39:45.736 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:39:45.736 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:39:45.736 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:39:45.736 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:39:45.736 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:45.736 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:39:45.994 00:39:45.994 Discovery Log Number of Records 2, Generation counter 2 00:39:45.994 =====Discovery Log Entry 0====== 00:39:45.994 trtype: tcp 00:39:45.994 adrfam: ipv4 00:39:45.994 subtype: current discovery subsystem 00:39:45.994 treq: not specified, sq flow control disable supported 00:39:45.994 portid: 1 00:39:45.994 trsvcid: 4420 00:39:45.994 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:45.994 traddr: 10.0.0.1 00:39:45.994 eflags: none 00:39:45.994 sectype: none 00:39:45.994 =====Discovery Log Entry 1====== 00:39:45.994 trtype: tcp 00:39:45.994 adrfam: ipv4 00:39:45.994 subtype: nvme subsystem 00:39:45.994 treq: not specified, sq flow control disable supported 00:39:45.994 portid: 1 00:39:45.994 trsvcid: 4420 00:39:45.994 subnqn: nqn.2016-06.io.spdk:testnqn 00:39:45.994 traddr: 10.0.0.1 00:39:45.994 eflags: none 00:39:45.994 sectype: none 00:39:45.994 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:39:45.994 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:45.994 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:45.994 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:39:45.994 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:45.994 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:45.994 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:45.994 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:45.994 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:45.994 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:45.994 22:23:05 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:45.994 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:45.994 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:45.994 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:45.994 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:39:45.994 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:45.994 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:39:45.994 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:45.994 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:45.994 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:45.994 22:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:45.994 EAL: No free 2048 kB hugepages reported on node 1 00:39:49.306 Initializing NVMe Controllers 00:39:49.306 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:49.306 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:49.306 Initialization complete. Launching workers. 00:39:49.306 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 25234, failed: 0 00:39:49.306 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25234, failed to submit 0 00:39:49.306 success 0, unsuccess 25234, failed 0 00:39:49.306 22:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:49.306 22:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:49.306 EAL: No free 2048 kB hugepages reported on node 1 00:39:52.585 Initializing NVMe Controllers 00:39:52.585 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:52.585 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:52.585 Initialization complete. Launching workers. 
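[Editor's note] The mkdir/echo/ln -s sequence traced above (nvmf/common.sh@658 through @677) builds a Linux kernel NVMe-oF target through configfs; xtrace shows the echo arguments but not their redirection targets. A minimal sketch of the equivalent setup, assuming the standard nvmet configfs attribute names (the actual attribute paths are not captured in this log):

    # load the kernel target core and its TCP transport
    modprobe nvmet nvmet-tcp
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo 1 > "$subsys/attr_allow_any_host"                  # accept any host NQN
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"  # back namespace 1 with the local NVMe disk
    echo 1 > "$subsys/namespaces/1/enable"                  # bring the namespace online
    echo 10.0.0.1 > "$port/addr_traddr"                     # listen address
    echo tcp > "$port/addr_trtype"                          # transport type
    echo 4420 > "$port/addr_trsvcid"                        # service id (TCP port)
    echo ipv4 > "$port/addr_adrfam"                         # address family
    ln -s "$subsys" "$port/subsystems/"                     # expose the subsystem on the port

The nvme discover output above confirms the result: entry 0 is the discovery subsystem itself, entry 1 is nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420. The rabort loop then runs SPDK's abort example against that target at queue depths 4, 24 and 64 (the first run above, the remaining runs below). One run, with the flag meanings interpreted from SPDK's perf-style options rather than stated in the log:

    # -q queue depth; -w rw with -M 50 is a mixed workload with 50% reads;
    # -o 4 KiB I/O size; -r is the transport ID of the target to abort against
    ./build/examples/abort -q 4 -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'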
00:39:52.585 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 54301, failed: 0 00:39:52.585 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13670, failed to submit 40631 00:39:52.585 success 0, unsuccess 13670, failed 0 00:39:52.585 22:23:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:52.585 22:23:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:52.585 EAL: No free 2048 kB hugepages reported on node 1 00:39:55.867 Initializing NVMe Controllers 00:39:55.867 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:55.867 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:55.867 Initialization complete. Launching workers. 00:39:55.867 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 52842, failed: 0 00:39:55.867 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 13166, failed to submit 39676 00:39:55.867 success 0, unsuccess 13166, failed 0 00:39:55.867 22:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:39:55.867 22:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:39:55.867 22:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:39:55.867 22:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:55.867 22:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:55.867 22:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:55.867 22:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:55.867 22:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:39:55.867 22:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:39:55.867 22:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:56.434 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:39:56.693 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:39:56.693 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:39:56.693 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:39:56.693 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:39:56.693 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:39:56.693 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:39:56.693 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:39:56.693 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:39:56.693 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:39:56.693 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:39:56.693 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:39:56.693 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:39:56.693 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:39:56.693 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:39:56.693 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:39:57.628 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:39:57.628 00:39:57.628 real 0m14.639s 00:39:57.628 user 0m5.471s 00:39:57.628 sys 0m3.514s 00:39:57.628 22:23:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:57.628 22:23:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:57.628 ************************************ 00:39:57.628 END TEST kernel_target_abort 00:39:57.628 ************************************ 00:39:57.628 22:23:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:39:57.628 22:23:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:39:57.628 22:23:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:39:57.628 22:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:57.628 22:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:39:57.628 22:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:57.628 22:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:39:57.628 22:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:57.628 22:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:57.628 rmmod nvme_tcp 00:39:57.886 rmmod nvme_fabrics 00:39:57.886 rmmod nvme_keyring 00:39:57.886 22:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:57.886 22:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:39:57.886 22:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:39:57.886 22:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 76089 ']' 00:39:57.886 22:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 76089 00:39:57.886 22:23:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 76089 ']' 00:39:57.886 22:23:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 76089 00:39:57.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (76089) - No such process 00:39:57.886 22:23:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 76089 is not found' 00:39:57.886 Process with pid 76089 is not found 00:39:57.886 22:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:39:57.886 22:23:17 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:58.821 Waiting for block devices as requested 00:39:58.821 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:39:59.080 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:59.080 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:59.080 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:59.339 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:59.339 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:39:59.339 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:59.339 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:59.598 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:39:59.598 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:39:59.598 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:39:59.598 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:39:59.598 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:39:59.857 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 
00:39:59.857 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:39:59.857 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:39:59.857 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:40:00.116 22:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:40:00.116 22:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:40:00.116 22:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:40:00.116 22:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:40:00.116 22:23:19 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:00.116 22:23:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:00.116 22:23:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:02.019 22:23:21 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:40:02.019 00:40:02.019 real 0m39.967s 00:40:02.019 user 1m7.480s 00:40:02.019 sys 0m9.478s 00:40:02.019 22:23:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:02.019 22:23:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:40:02.019 ************************************ 00:40:02.019 END TEST nvmf_abort_qd_sizes 00:40:02.019 ************************************ 00:40:02.019 22:23:21 -- common/autotest_common.sh@1142 -- # return 0 00:40:02.019 22:23:21 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:02.019 22:23:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:02.019 22:23:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:02.019 22:23:21 -- common/autotest_common.sh@10 -- # set +x 00:40:02.277 ************************************ 00:40:02.277 START TEST keyring_file 00:40:02.277 ************************************ 00:40:02.277 22:23:21 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:40:02.277 * Looking for test storage... 
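[Editor's note] The keyring_file suite starting here drives everything over JSON-RPC: an spdk_tgt on the default /var/tmp/spdk.sock and a bdevperf instance on /var/tmp/bperf.sock. The helpers expanded throughout the traces below (keyring/common.sh@8 through @12) reduce to roughly the following; the function bodies are a reconstruction from the expanded commands visible in this log:

    bperfsock=/var/tmp/bperf.sock
    bperf_cmd()  { scripts/rpc.py -s "$bperfsock" "$@"; }                              # common.sh@8
    get_key()    { bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"; }  # common.sh@10
    get_refcnt() { get_key "$1" | jq -r .refcnt; }                                     # common.sh@12

Assertions such as the (( 1 == 1 )) checks below are get_refcnt results compared against the expected key reference counts.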
00:40:02.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:02.277 22:23:21 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:02.277 22:23:21 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:02.277 22:23:21 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:02.277 22:23:21 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:02.277 22:23:21 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:02.277 22:23:21 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.277 22:23:21 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.277 22:23:21 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.277 22:23:21 keyring_file -- paths/export.sh@5 -- # export PATH 00:40:02.277 22:23:21 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@47 -- # : 0 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:02.277 22:23:21 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:02.277 22:23:21 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:02.277 22:23:21 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:02.277 22:23:21 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:40:02.277 22:23:21 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:40:02.277 22:23:21 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:40:02.277 22:23:21 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:02.277 22:23:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:02.277 22:23:21 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:02.277 22:23:21 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:02.277 22:23:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:02.277 22:23:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:02.277 22:23:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.MuPWdA9RwD 00:40:02.277 22:23:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:02.277 22:23:21 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:40:02.278 22:23:21 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:40:02.278 22:23:21 keyring_file -- nvmf/common.sh@705 -- # python - 00:40:02.278 22:23:21 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.MuPWdA9RwD 00:40:02.278 22:23:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.MuPWdA9RwD 00:40:02.278 22:23:21 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.MuPWdA9RwD 00:40:02.278 22:23:21 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:40:02.278 22:23:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:02.278 22:23:21 keyring_file -- keyring/common.sh@17 -- # name=key1 00:40:02.278 22:23:21 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:02.278 22:23:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:02.278 22:23:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:02.278 22:23:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.9C73y6mU7N 00:40:02.278 22:23:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:02.278 22:23:21 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:02.278 22:23:21 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:40:02.278 22:23:21 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:02.278 22:23:21 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:40:02.278 22:23:21 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:40:02.278 22:23:21 keyring_file -- nvmf/common.sh@705 -- # python - 00:40:02.278 22:23:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9C73y6mU7N 00:40:02.278 22:23:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.9C73y6mU7N 00:40:02.278 22:23:21 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.9C73y6mU7N 00:40:02.278 22:23:21 keyring_file -- keyring/file.sh@30 -- # tgtpid=82308 00:40:02.278 22:23:21 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:02.278 22:23:21 keyring_file -- keyring/file.sh@32 -- # waitforlisten 82308 00:40:02.278 22:23:21 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 82308 ']' 00:40:02.278 22:23:21 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:02.278 22:23:21 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:02.278 22:23:21 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:02.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:02.278 22:23:21 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:02.278 22:23:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:02.536 [2024-07-13 22:23:21.677107] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
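[Editor's note] prep_key above wrote each test key to a mktemp file in the TLS PSK interchange form and chmod'ed it 0600, a mode the target enforces, as the keyring_file_add_key failure on a 0660 file later in this run shows. The inline "python -" whose body xtrace hides plausibly computes the interchange string; a sketch assuming the NVMe/TCP layout NVMeTLSkey-1:<hash>:<base64(key bytes plus CRC-32)>: with a little-endian CRC and hash indicator 00 for digest 0 (all three details are assumptions, none visible in the log):

    # prints an interchange string of the form NVMeTLSkey-1:00:...: for the first test key
    python3 -c 'import base64, zlib; key = bytes.fromhex("00112233445566778899aabbccddeeff"); crc = zlib.crc32(key).to_bytes(4, "little"); print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")'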
00:40:02.536 [2024-07-13 22:23:21.677268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82308 ] 00:40:02.536 EAL: No free 2048 kB hugepages reported on node 1 00:40:02.536 [2024-07-13 22:23:21.807288] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:02.794 [2024-07-13 22:23:22.062336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:03.730 22:23:22 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:03.730 22:23:22 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:40:03.730 22:23:22 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:40:03.730 22:23:22 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:03.730 22:23:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:03.730 [2024-07-13 22:23:22.961424] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:03.730 null0 00:40:03.730 [2024-07-13 22:23:22.993481] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:03.730 [2024-07-13 22:23:22.994047] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:03.730 [2024-07-13 22:23:23.001520] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:40:03.730 22:23:23 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:03.730 22:23:23 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:03.730 22:23:23 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:40:03.730 22:23:23 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:03.730 22:23:23 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:40:03.730 22:23:23 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:03.730 22:23:23 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:40:03.730 22:23:23 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:03.730 22:23:23 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:40:03.730 22:23:23 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:03.730 22:23:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:03.730 [2024-07-13 22:23:23.009523] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:40:03.730 request: 00:40:03.730 { 00:40:03.730 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:40:03.730 "secure_channel": false, 00:40:03.730 "listen_address": { 00:40:03.730 "trtype": "tcp", 00:40:03.730 "traddr": "127.0.0.1", 00:40:03.730 "trsvcid": "4420" 00:40:03.730 }, 00:40:03.730 "method": "nvmf_subsystem_add_listener", 00:40:03.730 "req_id": 1 00:40:03.730 } 00:40:03.730 Got JSON-RPC error response 00:40:03.730 response: 00:40:03.730 { 00:40:03.730 "code": -32602, 00:40:03.730 "message": "Invalid parameters" 00:40:03.730 } 00:40:03.731 22:23:23 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:40:03.731 22:23:23 keyring_file -- common/autotest_common.sh@651 -- # es=1 
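[Editor's note] The request/response pair above is the suite's first negative test: the target already listens on 127.0.0.1:4420 for nqn.2016-06.io.spdk:cnode0, so the duplicate nvmf_subsystem_add_listener fails with JSON-RPC -32602 ("Listener already exists"). The equivalent direct call, arguments exactly as traced:

    # fails with -32602 while the listener already exists
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 127.0.0.1 -s 4420

The surrounding NOT helper inverts the exit status (the es bookkeeping that follows), so the test passes precisely because the RPC fails.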
00:40:03.731 22:23:23 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:03.731 22:23:23 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:03.731 22:23:23 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:03.731 22:23:23 keyring_file -- keyring/file.sh@46 -- # bperfpid=82450 00:40:03.731 22:23:23 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:40:03.731 22:23:23 keyring_file -- keyring/file.sh@48 -- # waitforlisten 82450 /var/tmp/bperf.sock 00:40:03.731 22:23:23 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 82450 ']' 00:40:03.731 22:23:23 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:03.731 22:23:23 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:03.731 22:23:23 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:03.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:03.731 22:23:23 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:03.731 22:23:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:03.731 [2024-07-13 22:23:23.090877] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:40:03.731 [2024-07-13 22:23:23.091050] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82450 ] 00:40:03.990 EAL: No free 2048 kB hugepages reported on node 1 00:40:03.990 [2024-07-13 22:23:23.220534] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:04.247 [2024-07-13 22:23:23.473269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:04.813 22:23:24 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:04.813 22:23:24 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:40:04.813 22:23:24 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MuPWdA9RwD 00:40:04.813 22:23:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MuPWdA9RwD 00:40:05.113 22:23:24 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.9C73y6mU7N 00:40:05.113 22:23:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.9C73y6mU7N 00:40:05.381 22:23:24 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:40:05.381 22:23:24 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:40:05.381 22:23:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:05.381 22:23:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:05.381 22:23:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:05.381 22:23:24 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.MuPWdA9RwD == \/\t\m\p\/\t\m\p\.\M\u\P\W\d\A\9\R\w\D ]] 00:40:05.381 22:23:24 keyring_file -- keyring/file.sh@52 -- # 
get_key key1 00:40:05.381 22:23:24 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:40:05.381 22:23:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:05.381 22:23:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:05.381 22:23:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:05.640 22:23:25 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.9C73y6mU7N == \/\t\m\p\/\t\m\p\.\9\C\7\3\y\6\m\U\7\N ]] 00:40:05.640 22:23:25 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:40:05.640 22:23:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:05.640 22:23:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:05.640 22:23:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:05.640 22:23:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:05.640 22:23:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:05.898 22:23:25 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:40:05.898 22:23:25 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:40:05.898 22:23:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:05.898 22:23:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:05.898 22:23:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:05.898 22:23:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:05.898 22:23:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:06.156 22:23:25 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:40:06.156 22:23:25 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:06.156 22:23:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:06.416 [2024-07-13 22:23:25.710805] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:06.416 nvme0n1 00:40:06.676 22:23:25 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:40:06.676 22:23:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:06.676 22:23:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:06.676 22:23:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:06.676 22:23:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:06.676 22:23:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:06.934 22:23:26 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:40:06.934 22:23:26 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:40:06.934 22:23:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:06.935 22:23:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:06.935 22:23:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys
00:40:06.935 22:23:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:06.935 22:23:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:40:07.193 22:23:26 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 ))
00:40:07.193 22:23:26 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:40:07.193 Running I/O for 1 seconds...
00:40:08.126
00:40:08.126                                                Latency(us)
00:40:08.126 Device Information   : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:40:08.126 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:40:08.126 nvme0n1              :       1.03    3327.36      13.00       0.00       0.00   37834.84   11893.57   44467.39
00:40:08.126 ===================================================================================================================
00:40:08.126 Total                :              3327.36      13.00       0.00       0.00   37834.84   11893.57   44467.39
00:40:08.126 0
00:40:08.126 22:23:27 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:40:08.126 22:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:40:08.383 22:23:27 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0
00:40:08.383 22:23:27 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:40:08.383 22:23:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:40:08.383 22:23:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:40:08.383 22:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:08.383 22:23:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:40:08.640 22:23:27 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 ))
00:40:08.640 22:23:27 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1
00:40:08.640 22:23:27 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:40:08.640 22:23:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:40:08.640 22:23:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:40:08.640 22:23:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:40:08.640 22:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:08.899 22:23:28 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:40:08.899 22:23:28 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:40:08.899 22:23:28 keyring_file -- common/autotest_common.sh@648 -- # local es=0
00:40:08.899 22:23:28 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:40:08.899 22:23:28 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd
00:40:08.899 22:23:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:40:08.899 22:23:28 keyring_file -- common/autotest_common.sh@640 -- # type -t
bperf_cmd 00:40:08.899 22:23:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:08.899 22:23:28 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:08.899 22:23:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:40:09.157 [2024-07-13 22:23:28.476017] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:09.157 [2024-07-13 22:23:28.476047] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:09.157 [2024-07-13 22:23:28.476981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7500 (9): Bad file descriptor 00:40:09.157 [2024-07-13 22:23:28.477979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:09.157 [2024-07-13 22:23:28.478008] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:09.157 [2024-07-13 22:23:28.478028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:09.157 request: 00:40:09.157 { 00:40:09.157 "name": "nvme0", 00:40:09.157 "trtype": "tcp", 00:40:09.157 "traddr": "127.0.0.1", 00:40:09.157 "adrfam": "ipv4", 00:40:09.157 "trsvcid": "4420", 00:40:09.157 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:09.157 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:09.157 "prchk_reftag": false, 00:40:09.157 "prchk_guard": false, 00:40:09.157 "hdgst": false, 00:40:09.157 "ddgst": false, 00:40:09.157 "psk": "key1", 00:40:09.157 "method": "bdev_nvme_attach_controller", 00:40:09.157 "req_id": 1 00:40:09.157 } 00:40:09.157 Got JSON-RPC error response 00:40:09.157 response: 00:40:09.157 { 00:40:09.157 "code": -5, 00:40:09.157 "message": "Input/output error" 00:40:09.157 } 00:40:09.157 22:23:28 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:40:09.157 22:23:28 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:09.157 22:23:28 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:09.157 22:23:28 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:09.157 22:23:28 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:40:09.157 22:23:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:09.157 22:23:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:09.157 22:23:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:09.157 22:23:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:09.157 22:23:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:09.414 22:23:28 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:40:09.414 22:23:28 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:40:09.414 22:23:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:09.414 
22:23:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:09.414 22:23:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:09.414 22:23:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:09.414 22:23:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:09.672 22:23:28 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:40:09.672 22:23:28 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:40:09.672 22:23:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:09.930 22:23:29 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:40:09.930 22:23:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:40:10.188 22:23:29 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:40:10.188 22:23:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:10.188 22:23:29 keyring_file -- keyring/file.sh@77 -- # jq length 00:40:10.445 22:23:29 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:40:10.445 22:23:29 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.MuPWdA9RwD 00:40:10.445 22:23:29 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.MuPWdA9RwD 00:40:10.445 22:23:29 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:40:10.445 22:23:29 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.MuPWdA9RwD 00:40:10.445 22:23:29 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:40:10.445 22:23:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:10.445 22:23:29 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:40:10.445 22:23:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:10.445 22:23:29 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MuPWdA9RwD 00:40:10.445 22:23:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MuPWdA9RwD 00:40:10.703 [2024-07-13 22:23:29.950969] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.MuPWdA9RwD': 0100660 00:40:10.703 [2024-07-13 22:23:29.951014] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:40:10.703 request: 00:40:10.703 { 00:40:10.703 "name": "key0", 00:40:10.703 "path": "/tmp/tmp.MuPWdA9RwD", 00:40:10.703 "method": "keyring_file_add_key", 00:40:10.703 "req_id": 1 00:40:10.703 } 00:40:10.703 Got JSON-RPC error response 00:40:10.703 response: 00:40:10.703 { 00:40:10.703 "code": -1, 00:40:10.703 "message": "Operation not permitted" 00:40:10.703 } 00:40:10.703 22:23:29 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:40:10.703 22:23:29 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:10.703 22:23:29 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:10.703 22:23:29 keyring_file 
-- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:10.703 22:23:29 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.MuPWdA9RwD 00:40:10.703 22:23:29 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.MuPWdA9RwD 00:40:10.703 22:23:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.MuPWdA9RwD 00:40:10.960 22:23:30 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.MuPWdA9RwD 00:40:10.960 22:23:30 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:40:10.960 22:23:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:10.960 22:23:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:10.960 22:23:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:10.960 22:23:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:10.960 22:23:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:11.217 22:23:30 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:40:11.217 22:23:30 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:11.217 22:23:30 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:40:11.217 22:23:30 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:11.217 22:23:30 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:40:11.217 22:23:30 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:11.218 22:23:30 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:40:11.218 22:23:30 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:11.218 22:23:30 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:11.218 22:23:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:11.477 [2024-07-13 22:23:30.749264] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.MuPWdA9RwD': No such file or directory 00:40:11.477 [2024-07-13 22:23:30.749316] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:40:11.477 [2024-07-13 22:23:30.749360] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:40:11.477 [2024-07-13 22:23:30.749377] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:40:11.477 [2024-07-13 22:23:30.749394] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:40:11.477 request: 00:40:11.477 { 00:40:11.477 "name": "nvme0", 00:40:11.477 "trtype": "tcp", 00:40:11.478 "traddr": "127.0.0.1", 00:40:11.478 "adrfam": "ipv4", 00:40:11.478 
"trsvcid": "4420", 00:40:11.478 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:11.478 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:11.478 "prchk_reftag": false, 00:40:11.478 "prchk_guard": false, 00:40:11.478 "hdgst": false, 00:40:11.478 "ddgst": false, 00:40:11.478 "psk": "key0", 00:40:11.478 "method": "bdev_nvme_attach_controller", 00:40:11.478 "req_id": 1 00:40:11.478 } 00:40:11.478 Got JSON-RPC error response 00:40:11.478 response: 00:40:11.478 { 00:40:11.478 "code": -19, 00:40:11.478 "message": "No such device" 00:40:11.478 } 00:40:11.478 22:23:30 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:40:11.478 22:23:30 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:11.478 22:23:30 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:11.478 22:23:30 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:11.478 22:23:30 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:40:11.478 22:23:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:11.735 22:23:31 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:40:11.735 22:23:31 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:40:11.735 22:23:31 keyring_file -- keyring/common.sh@17 -- # name=key0 00:40:11.735 22:23:31 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:11.735 22:23:31 keyring_file -- keyring/common.sh@17 -- # digest=0 00:40:11.735 22:23:31 keyring_file -- keyring/common.sh@18 -- # mktemp 00:40:11.735 22:23:31 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.bQGV0k53zc 00:40:11.735 22:23:31 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:11.735 22:23:31 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:11.735 22:23:31 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:40:11.735 22:23:31 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:11.735 22:23:31 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:40:11.735 22:23:31 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:40:11.735 22:23:31 keyring_file -- nvmf/common.sh@705 -- # python - 00:40:11.735 22:23:31 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.bQGV0k53zc 00:40:11.735 22:23:31 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.bQGV0k53zc 00:40:11.735 22:23:31 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.bQGV0k53zc 00:40:11.735 22:23:31 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bQGV0k53zc 00:40:11.735 22:23:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bQGV0k53zc 00:40:11.993 22:23:31 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:11.993 22:23:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:12.250 nvme0n1 00:40:12.250 
22:23:31 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:40:12.250 22:23:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:12.250 22:23:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:12.251 22:23:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:12.251 22:23:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:12.251 22:23:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:12.509 22:23:31 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:40:12.509 22:23:31 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:40:12.509 22:23:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:40:12.767 22:23:32 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:40:12.768 22:23:32 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:40:12.768 22:23:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:12.768 22:23:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:12.768 22:23:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:13.026 22:23:32 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:40:13.026 22:23:32 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:40:13.026 22:23:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:13.026 22:23:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:13.026 22:23:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:13.026 22:23:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:13.026 22:23:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:13.283 22:23:32 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:40:13.283 22:23:32 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:13.283 22:23:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:13.542 22:23:32 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:40:13.543 22:23:32 keyring_file -- keyring/file.sh@104 -- # jq length 00:40:13.543 22:23:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:13.801 22:23:33 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:40:13.801 22:23:33 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bQGV0k53zc 00:40:13.801 22:23:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bQGV0k53zc 00:40:14.058 22:23:33 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.9C73y6mU7N 00:40:14.058 22:23:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.9C73y6mU7N 00:40:14.316 22:23:33 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:14.316 22:23:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:40:14.574 nvme0n1 00:40:14.574 22:23:33 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:40:14.574 22:23:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:40:14.832 22:23:34 keyring_file -- keyring/file.sh@112 -- # config='{ 00:40:14.832 "subsystems": [ 00:40:14.832 { 00:40:14.832 "subsystem": "keyring", 00:40:14.832 "config": [ 00:40:14.832 { 00:40:14.832 "method": "keyring_file_add_key", 00:40:14.832 "params": { 00:40:14.832 "name": "key0", 00:40:14.832 "path": "/tmp/tmp.bQGV0k53zc" 00:40:14.832 } 00:40:14.832 }, 00:40:14.832 { 00:40:14.832 "method": "keyring_file_add_key", 00:40:14.832 "params": { 00:40:14.832 "name": "key1", 00:40:14.832 "path": "/tmp/tmp.9C73y6mU7N" 00:40:14.832 } 00:40:14.832 } 00:40:14.832 ] 00:40:14.832 }, 00:40:14.832 { 00:40:14.832 "subsystem": "iobuf", 00:40:14.832 "config": [ 00:40:14.832 { 00:40:14.832 "method": "iobuf_set_options", 00:40:14.832 "params": { 00:40:14.832 "small_pool_count": 8192, 00:40:14.832 "large_pool_count": 1024, 00:40:14.832 "small_bufsize": 8192, 00:40:14.832 "large_bufsize": 135168 00:40:14.832 } 00:40:14.832 } 00:40:14.832 ] 00:40:14.832 }, 00:40:14.832 { 00:40:14.832 "subsystem": "sock", 00:40:14.832 "config": [ 00:40:14.832 { 00:40:14.832 "method": "sock_set_default_impl", 00:40:14.832 "params": { 00:40:14.832 "impl_name": "posix" 00:40:14.832 } 00:40:14.832 }, 00:40:14.832 { 00:40:14.832 "method": "sock_impl_set_options", 00:40:14.832 "params": { 00:40:14.832 "impl_name": "ssl", 00:40:14.832 "recv_buf_size": 4096, 00:40:14.832 "send_buf_size": 4096, 00:40:14.832 "enable_recv_pipe": true, 00:40:14.832 "enable_quickack": false, 00:40:14.832 "enable_placement_id": 0, 00:40:14.832 "enable_zerocopy_send_server": true, 00:40:14.832 "enable_zerocopy_send_client": false, 00:40:14.832 "zerocopy_threshold": 0, 00:40:14.832 "tls_version": 0, 00:40:14.832 "enable_ktls": false 00:40:14.832 } 00:40:14.832 }, 00:40:14.832 { 00:40:14.832 "method": "sock_impl_set_options", 00:40:14.832 "params": { 00:40:14.832 "impl_name": "posix", 00:40:14.832 "recv_buf_size": 2097152, 00:40:14.832 "send_buf_size": 2097152, 00:40:14.832 "enable_recv_pipe": true, 00:40:14.832 "enable_quickack": false, 00:40:14.832 "enable_placement_id": 0, 00:40:14.832 "enable_zerocopy_send_server": true, 00:40:14.832 "enable_zerocopy_send_client": false, 00:40:14.832 "zerocopy_threshold": 0, 00:40:14.832 "tls_version": 0, 00:40:14.832 "enable_ktls": false 00:40:14.832 } 00:40:14.832 } 00:40:14.832 ] 00:40:14.832 }, 00:40:14.832 { 00:40:14.832 "subsystem": "vmd", 00:40:14.832 "config": [] 00:40:14.832 }, 00:40:14.832 { 00:40:14.832 "subsystem": "accel", 00:40:14.832 "config": [ 00:40:14.832 { 00:40:14.832 "method": "accel_set_options", 00:40:14.832 "params": { 00:40:14.832 "small_cache_size": 128, 00:40:14.832 "large_cache_size": 16, 00:40:14.832 "task_count": 2048, 00:40:14.832 "sequence_count": 2048, 00:40:14.832 "buf_count": 2048 00:40:14.832 } 00:40:14.832 } 00:40:14.832 ] 00:40:14.832 
}, 00:40:14.832 { 00:40:14.832 "subsystem": "bdev", 00:40:14.832 "config": [ 00:40:14.832 { 00:40:14.832 "method": "bdev_set_options", 00:40:14.832 "params": { 00:40:14.832 "bdev_io_pool_size": 65535, 00:40:14.832 "bdev_io_cache_size": 256, 00:40:14.832 "bdev_auto_examine": true, 00:40:14.832 "iobuf_small_cache_size": 128, 00:40:14.832 "iobuf_large_cache_size": 16 00:40:14.832 } 00:40:14.832 }, 00:40:14.832 { 00:40:14.832 "method": "bdev_raid_set_options", 00:40:14.832 "params": { 00:40:14.832 "process_window_size_kb": 1024 00:40:14.832 } 00:40:14.832 }, 00:40:14.832 { 00:40:14.832 "method": "bdev_iscsi_set_options", 00:40:14.832 "params": { 00:40:14.832 "timeout_sec": 30 00:40:14.832 } 00:40:14.832 }, 00:40:14.832 { 00:40:14.832 "method": "bdev_nvme_set_options", 00:40:14.832 "params": { 00:40:14.832 "action_on_timeout": "none", 00:40:14.832 "timeout_us": 0, 00:40:14.832 "timeout_admin_us": 0, 00:40:14.832 "keep_alive_timeout_ms": 10000, 00:40:14.832 "arbitration_burst": 0, 00:40:14.832 "low_priority_weight": 0, 00:40:14.832 "medium_priority_weight": 0, 00:40:14.832 "high_priority_weight": 0, 00:40:14.832 "nvme_adminq_poll_period_us": 10000, 00:40:14.832 "nvme_ioq_poll_period_us": 0, 00:40:14.832 "io_queue_requests": 512, 00:40:14.832 "delay_cmd_submit": true, 00:40:14.833 "transport_retry_count": 4, 00:40:14.833 "bdev_retry_count": 3, 00:40:14.833 "transport_ack_timeout": 0, 00:40:14.833 "ctrlr_loss_timeout_sec": 0, 00:40:14.833 "reconnect_delay_sec": 0, 00:40:14.833 "fast_io_fail_timeout_sec": 0, 00:40:14.833 "disable_auto_failback": false, 00:40:14.833 "generate_uuids": false, 00:40:14.833 "transport_tos": 0, 00:40:14.833 "nvme_error_stat": false, 00:40:14.833 "rdma_srq_size": 0, 00:40:14.833 "io_path_stat": false, 00:40:14.833 "allow_accel_sequence": false, 00:40:14.833 "rdma_max_cq_size": 0, 00:40:14.833 "rdma_cm_event_timeout_ms": 0, 00:40:14.833 "dhchap_digests": [ 00:40:14.833 "sha256", 00:40:14.833 "sha384", 00:40:14.833 "sha512" 00:40:14.833 ], 00:40:14.833 "dhchap_dhgroups": [ 00:40:14.833 "null", 00:40:14.833 "ffdhe2048", 00:40:14.833 "ffdhe3072", 00:40:14.833 "ffdhe4096", 00:40:14.833 "ffdhe6144", 00:40:14.833 "ffdhe8192" 00:40:14.833 ] 00:40:14.833 } 00:40:14.833 }, 00:40:14.833 { 00:40:14.833 "method": "bdev_nvme_attach_controller", 00:40:14.833 "params": { 00:40:14.833 "name": "nvme0", 00:40:14.833 "trtype": "TCP", 00:40:14.833 "adrfam": "IPv4", 00:40:14.833 "traddr": "127.0.0.1", 00:40:14.833 "trsvcid": "4420", 00:40:14.833 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:14.833 "prchk_reftag": false, 00:40:14.833 "prchk_guard": false, 00:40:14.833 "ctrlr_loss_timeout_sec": 0, 00:40:14.833 "reconnect_delay_sec": 0, 00:40:14.833 "fast_io_fail_timeout_sec": 0, 00:40:14.833 "psk": "key0", 00:40:14.833 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:14.833 "hdgst": false, 00:40:14.833 "ddgst": false 00:40:14.833 } 00:40:14.833 }, 00:40:14.833 { 00:40:14.833 "method": "bdev_nvme_set_hotplug", 00:40:14.833 "params": { 00:40:14.833 "period_us": 100000, 00:40:14.833 "enable": false 00:40:14.833 } 00:40:14.833 }, 00:40:14.833 { 00:40:14.833 "method": "bdev_wait_for_examine" 00:40:14.833 } 00:40:14.833 ] 00:40:14.833 }, 00:40:14.833 { 00:40:14.833 "subsystem": "nbd", 00:40:14.833 "config": [] 00:40:14.833 } 00:40:14.833 ] 00:40:14.833 }' 00:40:14.833 22:23:34 keyring_file -- keyring/file.sh@114 -- # killprocess 82450 00:40:14.833 22:23:34 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 82450 ']' 00:40:14.833 22:23:34 keyring_file -- common/autotest_common.sh@952 -- # kill -0 
82450 00:40:14.833 22:23:34 keyring_file -- common/autotest_common.sh@953 -- # uname 00:40:14.833 22:23:34 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:14.833 22:23:34 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82450 00:40:15.092 22:23:34 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:15.092 22:23:34 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:15.092 22:23:34 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82450' 00:40:15.092 killing process with pid 82450 00:40:15.092 22:23:34 keyring_file -- common/autotest_common.sh@967 -- # kill 82450 00:40:15.092 Received shutdown signal, test time was about 1.000000 seconds 00:40:15.092 00:40:15.092 Latency(us) 00:40:15.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:15.093 =================================================================================================================== 00:40:15.093 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:15.093 22:23:34 keyring_file -- common/autotest_common.sh@972 -- # wait 82450 00:40:16.029 22:23:35 keyring_file -- keyring/file.sh@117 -- # bperfpid=84062 00:40:16.029 22:23:35 keyring_file -- keyring/file.sh@119 -- # waitforlisten 84062 /var/tmp/bperf.sock 00:40:16.029 22:23:35 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 84062 ']' 00:40:16.029 22:23:35 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:16.029 22:23:35 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:40:16.029 22:23:35 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:16.029 22:23:35 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:16.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
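A condensed, runnable sketch of what the harness is doing at this point: bdevperf is relaunched with the JSON dump captured by save_config above handed in via process substitution (which is where the /dev/fd/63 in the command line comes from) and a private RPC socket, so every later bperf_cmd is just rpc.py aimed at that socket. The $config variable is assumed to hold the saved JSON from the first run; paths are the ones used in this workspace.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # this run's checkout
SOCK=/var/tmp/bperf.sock
# -z keeps bdevperf idle until perform_tests; -c <(...) is what expands to /dev/fd/63
"$SPDK/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r "$SOCK" -z -c <(echo "$config") &
# each subsequent bperf_cmd then reduces to, e.g.:
"$SPDK/scripts/rpc.py" -s "$SOCK" keyring_get_keys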
00:40:16.029 22:23:35 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:40:16.029 "subsystems": [ 00:40:16.029 { 00:40:16.029 "subsystem": "keyring", 00:40:16.029 "config": [ 00:40:16.029 { 00:40:16.029 "method": "keyring_file_add_key", 00:40:16.029 "params": { 00:40:16.029 "name": "key0", 00:40:16.029 "path": "/tmp/tmp.bQGV0k53zc" 00:40:16.029 } 00:40:16.029 }, 00:40:16.029 { 00:40:16.029 "method": "keyring_file_add_key", 00:40:16.029 "params": { 00:40:16.029 "name": "key1", 00:40:16.029 "path": "/tmp/tmp.9C73y6mU7N" 00:40:16.029 } 00:40:16.029 } 00:40:16.029 ] 00:40:16.029 }, 00:40:16.029 { 00:40:16.029 "subsystem": "iobuf", 00:40:16.029 "config": [ 00:40:16.029 { 00:40:16.029 "method": "iobuf_set_options", 00:40:16.029 "params": { 00:40:16.029 "small_pool_count": 8192, 00:40:16.029 "large_pool_count": 1024, 00:40:16.029 "small_bufsize": 8192, 00:40:16.029 "large_bufsize": 135168 00:40:16.029 } 00:40:16.029 } 00:40:16.029 ] 00:40:16.029 }, 00:40:16.029 { 00:40:16.029 "subsystem": "sock", 00:40:16.029 "config": [ 00:40:16.029 { 00:40:16.029 "method": "sock_set_default_impl", 00:40:16.029 "params": { 00:40:16.030 "impl_name": "posix" 00:40:16.030 } 00:40:16.030 }, 00:40:16.030 { 00:40:16.030 "method": "sock_impl_set_options", 00:40:16.030 "params": { 00:40:16.030 "impl_name": "ssl", 00:40:16.030 "recv_buf_size": 4096, 00:40:16.030 "send_buf_size": 4096, 00:40:16.030 "enable_recv_pipe": true, 00:40:16.030 "enable_quickack": false, 00:40:16.030 "enable_placement_id": 0, 00:40:16.030 "enable_zerocopy_send_server": true, 00:40:16.030 "enable_zerocopy_send_client": false, 00:40:16.030 "zerocopy_threshold": 0, 00:40:16.030 "tls_version": 0, 00:40:16.030 "enable_ktls": false 00:40:16.030 } 00:40:16.030 }, 00:40:16.030 { 00:40:16.030 "method": "sock_impl_set_options", 00:40:16.030 "params": { 00:40:16.030 "impl_name": "posix", 00:40:16.030 "recv_buf_size": 2097152, 00:40:16.030 "send_buf_size": 2097152, 00:40:16.030 "enable_recv_pipe": true, 00:40:16.030 "enable_quickack": false, 00:40:16.030 "enable_placement_id": 0, 00:40:16.030 "enable_zerocopy_send_server": true, 00:40:16.030 "enable_zerocopy_send_client": false, 00:40:16.030 "zerocopy_threshold": 0, 00:40:16.030 "tls_version": 0, 00:40:16.030 "enable_ktls": false 00:40:16.030 } 00:40:16.030 } 00:40:16.030 ] 00:40:16.030 }, 00:40:16.030 { 00:40:16.030 "subsystem": "vmd", 00:40:16.030 "config": [] 00:40:16.030 }, 00:40:16.030 { 00:40:16.030 "subsystem": "accel", 00:40:16.030 "config": [ 00:40:16.030 { 00:40:16.030 "method": "accel_set_options", 00:40:16.030 "params": { 00:40:16.030 "small_cache_size": 128, 00:40:16.030 "large_cache_size": 16, 00:40:16.030 "task_count": 2048, 00:40:16.030 "sequence_count": 2048, 00:40:16.030 "buf_count": 2048 00:40:16.030 } 00:40:16.030 } 00:40:16.030 ] 00:40:16.030 }, 00:40:16.030 { 00:40:16.030 "subsystem": "bdev", 00:40:16.030 "config": [ 00:40:16.030 { 00:40:16.030 "method": "bdev_set_options", 00:40:16.030 "params": { 00:40:16.030 "bdev_io_pool_size": 65535, 00:40:16.030 "bdev_io_cache_size": 256, 00:40:16.030 "bdev_auto_examine": true, 00:40:16.030 "iobuf_small_cache_size": 128, 00:40:16.030 "iobuf_large_cache_size": 16 00:40:16.030 } 00:40:16.030 }, 00:40:16.030 { 00:40:16.030 "method": "bdev_raid_set_options", 00:40:16.030 "params": { 00:40:16.030 "process_window_size_kb": 1024 00:40:16.030 } 00:40:16.030 }, 00:40:16.030 { 00:40:16.030 "method": "bdev_iscsi_set_options", 00:40:16.030 "params": { 00:40:16.030 "timeout_sec": 30 00:40:16.030 } 00:40:16.030 }, 00:40:16.030 { 00:40:16.030 "method": 
"bdev_nvme_set_options", 00:40:16.030 "params": { 00:40:16.030 "action_on_timeout": "none", 00:40:16.030 "timeout_us": 0, 00:40:16.030 "timeout_admin_us": 0, 00:40:16.030 "keep_alive_timeout_ms": 10000, 00:40:16.030 "arbitration_burst": 0, 00:40:16.030 "low_priority_weight": 0, 00:40:16.030 "medium_priority_weight": 0, 00:40:16.030 "high_priority_weight": 0, 00:40:16.030 "nvme_adminq_poll_period_us": 10000, 00:40:16.030 "nvme_ioq_poll_period_us": 0, 00:40:16.030 "io_queue_requests": 512, 00:40:16.030 "delay_cmd_submit": true, 00:40:16.030 "transport_retry_count": 4, 00:40:16.030 "bdev_retry_count": 3, 00:40:16.030 "transport_ack_timeout": 0, 00:40:16.030 "ctrlr_loss_timeout_sec": 0, 00:40:16.030 "reconnect_delay_sec": 0, 00:40:16.030 "fast_io_fail_timeout_sec": 0, 00:40:16.030 "disable_auto_failback": false, 00:40:16.030 "generate_uuids": false, 00:40:16.030 "transport_tos": 0, 00:40:16.030 "nvme_error_stat": false, 00:40:16.030 "rdma_srq_size": 0, 00:40:16.030 "io_path_stat": false, 00:40:16.030 "allow_accel_sequence": false, 00:40:16.030 "rdma_max_cq_size": 0, 00:40:16.030 "rdma_cm_event_timeout_ms": 0, 00:40:16.030 "dhchap_digests": [ 00:40:16.030 "sha256", 00:40:16.030 "sha384", 00:40:16.030 "sha512" 00:40:16.030 ], 00:40:16.030 "dhchap_dhgroups": [ 00:40:16.030 "null", 00:40:16.030 "ffdhe2048", 00:40:16.030 "ffdhe3072", 00:40:16.030 "ffdhe4096", 00:40:16.030 "ffdhe6144", 00:40:16.030 "ffdhe8192" 00:40:16.030 ] 00:40:16.030 } 00:40:16.030 }, 00:40:16.030 { 00:40:16.030 "method": "bdev_nvme_attach_controller", 00:40:16.030 "params": { 00:40:16.030 "name": "nvme0", 00:40:16.030 "trtype": "TCP", 00:40:16.030 "adrfam": "IPv4", 00:40:16.030 "traddr": "127.0.0.1", 00:40:16.030 "trsvcid": "4420", 00:40:16.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:16.030 "prchk_reftag": false, 00:40:16.030 "prchk_guard": false, 00:40:16.030 "ctrlr_loss_timeout_sec": 0, 00:40:16.030 "reconnect_delay_sec": 0, 00:40:16.030 "fast_io_fail_timeout_sec": 0, 00:40:16.030 "psk": "key0", 00:40:16.030 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:16.030 "hdgst": false, 00:40:16.030 "ddgst": false 00:40:16.030 } 00:40:16.030 }, 00:40:16.030 { 00:40:16.030 "method": "bdev_nvme_set_hotplug", 00:40:16.030 "params": { 00:40:16.030 "period_us": 100000, 00:40:16.030 "enable": false 00:40:16.030 } 00:40:16.030 }, 00:40:16.030 { 00:40:16.030 "method": "bdev_wait_for_examine" 00:40:16.030 } 00:40:16.030 ] 00:40:16.030 }, 00:40:16.030 { 00:40:16.030 "subsystem": "nbd", 00:40:16.030 "config": [] 00:40:16.030 } 00:40:16.030 ] 00:40:16.030 }' 00:40:16.030 22:23:35 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:16.030 22:23:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:16.030 [2024-07-13 22:23:35.370170] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:40:16.030 [2024-07-13 22:23:35.370340] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84062 ] 00:40:16.290 EAL: No free 2048 kB hugepages reported on node 1 00:40:16.290 [2024-07-13 22:23:35.493595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:16.549 [2024-07-13 22:23:35.721022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:16.807 [2024-07-13 22:23:36.128274] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:17.066 22:23:36 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:17.066 22:23:36 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:40:17.066 22:23:36 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:40:17.066 22:23:36 keyring_file -- keyring/file.sh@120 -- # jq length 00:40:17.066 22:23:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:17.324 22:23:36 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:40:17.324 22:23:36 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:40:17.324 22:23:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:40:17.324 22:23:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:17.324 22:23:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:17.324 22:23:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:17.324 22:23:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:40:17.582 22:23:36 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:40:17.582 22:23:36 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:40:17.582 22:23:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:40:17.582 22:23:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:40:17.582 22:23:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:17.582 22:23:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:17.582 22:23:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:40:17.861 22:23:37 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:40:17.862 22:23:37 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:40:17.862 22:23:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:40:17.862 22:23:37 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:40:18.138 22:23:37 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:40:18.138 22:23:37 keyring_file -- keyring/file.sh@1 -- # cleanup 00:40:18.138 22:23:37 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.bQGV0k53zc /tmp/tmp.9C73y6mU7N 00:40:18.138 22:23:37 keyring_file -- keyring/file.sh@20 -- # killprocess 84062 00:40:18.138 22:23:37 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 84062 ']' 00:40:18.138 22:23:37 keyring_file -- common/autotest_common.sh@952 -- # kill -0 84062 00:40:18.138 22:23:37 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:40:18.138 22:23:37 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:18.138 22:23:37 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84062 00:40:18.138 22:23:37 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:18.138 22:23:37 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:18.138 22:23:37 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84062' 00:40:18.138 killing process with pid 84062 00:40:18.138 22:23:37 keyring_file -- common/autotest_common.sh@967 -- # kill 84062 00:40:18.138 Received shutdown signal, test time was about 1.000000 seconds 00:40:18.138 00:40:18.138 Latency(us) 00:40:18.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:18.138 =================================================================================================================== 00:40:18.138 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:40:18.138 22:23:37 keyring_file -- common/autotest_common.sh@972 -- # wait 84062 00:40:19.075 22:23:38 keyring_file -- keyring/file.sh@21 -- # killprocess 82308 00:40:19.075 22:23:38 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 82308 ']' 00:40:19.075 22:23:38 keyring_file -- common/autotest_common.sh@952 -- # kill -0 82308 00:40:19.075 22:23:38 keyring_file -- common/autotest_common.sh@953 -- # uname 00:40:19.075 22:23:38 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:19.075 22:23:38 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82308 00:40:19.075 22:23:38 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:19.075 22:23:38 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:19.075 22:23:38 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82308' 00:40:19.075 killing process with pid 82308 00:40:19.075 22:23:38 keyring_file -- common/autotest_common.sh@967 -- # kill 82308 00:40:19.075 [2024-07-13 22:23:38.348770] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:40:19.075 22:23:38 keyring_file -- common/autotest_common.sh@972 -- # wait 82308 00:40:21.617 00:40:21.617 real 0m19.314s 00:40:21.617 user 0m42.245s 00:40:21.617 sys 0m3.786s 00:40:21.617 22:23:40 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:21.617 22:23:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:40:21.617 ************************************ 00:40:21.617 END TEST keyring_file 00:40:21.617 ************************************ 00:40:21.617 22:23:40 -- common/autotest_common.sh@1142 -- # return 0 00:40:21.617 22:23:40 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:40:21.618 22:23:40 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:21.618 22:23:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:21.618 22:23:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:21.618 22:23:40 -- common/autotest_common.sh@10 -- # set +x 00:40:21.618 ************************************ 00:40:21.618 START TEST keyring_linux 00:40:21.618 ************************************ 00:40:21.618 22:23:40 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:40:21.618 * Looking for test storage... 00:40:21.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:40:21.618 22:23:40 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:40:21.618 22:23:40 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:21.618 22:23:40 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:21.618 22:23:40 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:21.618 22:23:40 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:21.618 22:23:40 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:21.618 22:23:40 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:21.618 22:23:40 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:21.618 22:23:40 keyring_linux -- paths/export.sh@5 -- # export PATH 00:40:21.618 22:23:40 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:40:21.618 22:23:40 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:40:21.618 22:23:40 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:40:21.618 22:23:40 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:40:21.618 22:23:40 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:40:21.618 22:23:40 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:40:21.618 22:23:40 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:40:21.618 22:23:40 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:40:21.618 22:23:40 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:21.618 22:23:40 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:40:21.618 22:23:40 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:40:21.618 22:23:40 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:21.618 22:23:40 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:40:21.618 22:23:40 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@705 -- # python - 00:40:21.618 22:23:40 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:40:21.618 22:23:40 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:40:21.618 /tmp/:spdk-test:key0 00:40:21.618 22:23:40 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:40:21.618 22:23:40 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:40:21.618 22:23:40 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:40:21.618 22:23:40 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:40:21.618 22:23:40 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:40:21.618 22:23:40 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:40:21.618 22:23:40 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:40:21.618 22:23:40 keyring_linux -- nvmf/common.sh@705 -- # python - 00:40:21.618 22:23:40 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:40:21.618 22:23:40 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:40:21.618 /tmp/:spdk-test:key1 00:40:21.618 22:23:40 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=84813 00:40:21.618 22:23:40 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:40:21.618 22:23:40 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 84813 00:40:21.618 22:23:40 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 84813 ']' 00:40:21.618 22:23:40 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:21.618 22:23:40 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:21.618 22:23:40 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:21.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:21.618 22:23:40 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:21.618 22:23:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:21.877 [2024-07-13 22:23:41.023148] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
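The prep_key steps above turn each raw hex key into the TLS PSK interchange form before writing it to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1. Below is a sketch of what that python one-liner most likely computes, reconstructed from the inputs and outputs visible in this log rather than copied from nvmf/common.sh: the key bytes get a 4-byte CRC32 appended (little-endian byte order is an assumption here) and the result is base64-encoded behind the NVMeTLSkey-1 prefix, with the 00 field standing for digest 0 (no hash transform).

python3 - <<'PY'
import base64, struct, zlib
key = b"00112233445566778899aabbccddeeff"              # key0 from linux.sh@13
crc = struct.pack("<I", zlib.crc32(key) & 0xffffffff)  # checksum over the key bytes
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
PY
# if the byte-order assumption holds, this prints the same
# NVMeTLSkey-1:00:MDAx...JEiQ: string that keyctl stores below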
00:40:21.877 [2024-07-13 22:23:41.023306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84813 ] 00:40:21.877 EAL: No free 2048 kB hugepages reported on node 1 00:40:21.877 [2024-07-13 22:23:41.152396] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:22.135 [2024-07-13 22:23:41.404273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:23.071 22:23:42 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:23.071 22:23:42 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:40:23.071 22:23:42 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:40:23.071 22:23:42 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:23.071 22:23:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:23.071 [2024-07-13 22:23:42.285030] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:23.071 null0 00:40:23.071 [2024-07-13 22:23:42.317056] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:40:23.071 [2024-07-13 22:23:42.317628] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:40:23.071 22:23:42 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:23.071 22:23:42 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:40:23.071 608429020 00:40:23.071 22:23:42 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:40:23.071 604503693 00:40:23.071 22:23:42 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=84957 00:40:23.071 22:23:42 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:40:23.071 22:23:42 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 84957 /var/tmp/bperf.sock 00:40:23.071 22:23:42 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 84957 ']' 00:40:23.071 22:23:42 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:40:23.071 22:23:42 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:40:23.071 22:23:42 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:40:23.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:40:23.071 22:23:42 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:40:23.071 22:23:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:23.071 [2024-07-13 22:23:42.426149] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
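Note why this bdevperf instance carries both -z and --wait-for-rpc: the Linux-keyring backend is opt-in and, as the trace that follows suggests, has to be switched on before subsystem initialization. The sequence below condenses the next few RPCs from this log (socket path and NQNs as used in this run):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"
$RPC keyring_linux_set_options --enable    # backend is off by default; enable it pre-init
$RPC framework_start_init                  # only now do the subsystems come up
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key0                  # a kernel keyring name this time, not a file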
00:40:23.071 [2024-07-13 22:23:42.426309] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84957 ] 00:40:23.331 EAL: No free 2048 kB hugepages reported on node 1 00:40:23.331 [2024-07-13 22:23:42.569273] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:23.592 [2024-07-13 22:23:42.808653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:24.158 22:23:43 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:40:24.158 22:23:43 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:40:24.158 22:23:43 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:40:24.158 22:23:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:40:24.416 22:23:43 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:40:24.416 22:23:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:40:24.983 22:23:44 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:24.983 22:23:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:40:24.983 [2024-07-13 22:23:44.370217] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:40:25.241 nvme0n1 00:40:25.241 22:23:44 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:40:25.241 22:23:44 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:40:25.241 22:23:44 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:25.241 22:23:44 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:25.241 22:23:44 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:25.241 22:23:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:25.499 22:23:44 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:40:25.499 22:23:44 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:25.499 22:23:44 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:40:25.499 22:23:44 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:40:25.499 22:23:44 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:40:25.499 22:23:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:25.499 22:23:44 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:40:25.759 22:23:44 keyring_linux -- keyring/linux.sh@25 -- # sn=608429020 00:40:25.759 22:23:44 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:40:25.759 22:23:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:40:25.759 22:23:44 keyring_linux -- keyring/linux.sh@26 -- # [[ 608429020 == \6\0\8\4\2\9\0\2\0 ]] 00:40:25.759 22:23:44 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 608429020 00:40:25.759 22:23:44 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:40:25.759 22:23:44 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:40:25.759 Running I/O for 1 seconds... 00:40:27.138 00:40:27.138 Latency(us) 00:40:27.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:27.138 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:40:27.138 nvme0n1 : 1.03 3532.11 13.80 0.00 0.00 35806.81 11359.57 49321.91 00:40:27.138 =================================================================================================================== 00:40:27.138 Total : 3532.11 13.80 0.00 0.00 35806.81 11359.57 49321.91 00:40:27.138 0 00:40:27.138 22:23:46 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:40:27.138 22:23:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:40:27.138 22:23:46 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:40:27.138 22:23:46 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:40:27.138 22:23:46 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:40:27.138 22:23:46 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:40:27.138 22:23:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:40:27.138 22:23:46 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:40:27.396 22:23:46 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:40:27.396 22:23:46 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:40:27.396 22:23:46 keyring_linux -- keyring/linux.sh@23 -- # return 00:40:27.396 22:23:46 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:27.396 22:23:46 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:40:27.396 22:23:46 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:27.396 22:23:46 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:40:27.396 22:23:46 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:27.396 22:23:46 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:40:27.396 22:23:46 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:27.396 22:23:46 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:27.396 22:23:46 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:40:27.656 [2024-07-13 22:23:46.883816] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:40:27.656 [2024-07-13 22:23:46.884361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7000 (107): Transport endpoint is not connected 00:40:27.656 [2024-07-13 22:23:46.885324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150001f7000 (9): Bad file descriptor 00:40:27.656 [2024-07-13 22:23:46.886322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:40:27.656 [2024-07-13 22:23:46.886364] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:40:27.656 [2024-07-13 22:23:46.886388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:40:27.656 request: 00:40:27.656 { 00:40:27.656 "name": "nvme0", 00:40:27.656 "trtype": "tcp", 00:40:27.656 "traddr": "127.0.0.1", 00:40:27.656 "adrfam": "ipv4", 00:40:27.656 "trsvcid": "4420", 00:40:27.656 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:27.656 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:27.656 "prchk_reftag": false, 00:40:27.656 "prchk_guard": false, 00:40:27.656 "hdgst": false, 00:40:27.656 "ddgst": false, 00:40:27.656 "psk": ":spdk-test:key1", 00:40:27.656 "method": "bdev_nvme_attach_controller", 00:40:27.656 "req_id": 1 00:40:27.656 } 00:40:27.656 Got JSON-RPC error response 00:40:27.656 response: 00:40:27.656 { 00:40:27.656 "code": -5, 00:40:27.656 "message": "Input/output error" 00:40:27.656 } 00:40:27.656 22:23:46 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:40:27.656 22:23:46 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:27.656 22:23:46 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:40:27.656 22:23:46 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:27.656 22:23:46 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:40:27.656 22:23:46 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:27.656 22:23:46 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:40:27.656 22:23:46 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:40:27.656 22:23:46 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:40:27.656 22:23:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:40:27.656 22:23:46 keyring_linux -- keyring/linux.sh@33 -- # sn=608429020 00:40:27.656 22:23:46 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 608429020 00:40:27.656 1 links removed 00:40:27.656 22:23:46 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:40:27.656 22:23:46 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:40:27.656 22:23:46 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:40:27.656 22:23:46 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:40:27.656 22:23:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:40:27.656 22:23:46 keyring_linux -- keyring/linux.sh@33 -- # sn=604503693 
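The failed attach above is the negative case: presumably only key0 is valid for this host on the target side, so the handshake with :spdk-test:key1 dies and the NOT wrapper counts the Input/output error as a pass (es=1). Cleanup then resolves each named key back to its kernel serial and unlinks it, finishing with key1 just below; as a hedged sketch of that unlink_key idiom:

for name in :spdk-test:key0 :spdk-test:key1; do
    sn=$(keyctl search @s user "$name")   # name -> serial, e.g. 608429020 above
    keyctl unlink "$sn"                   # single-argument form; prints "1 links removed"
done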
00:40:27.656 22:23:46 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 604503693 00:40:27.656 1 links removed 00:40:27.656 22:23:46 keyring_linux -- keyring/linux.sh@41 -- # killprocess 84957 00:40:27.656 22:23:46 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 84957 ']' 00:40:27.656 22:23:46 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 84957 00:40:27.656 22:23:46 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:40:27.656 22:23:46 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:27.656 22:23:46 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84957 00:40:27.656 22:23:46 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:40:27.656 22:23:46 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:40:27.656 22:23:46 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84957' 00:40:27.656 killing process with pid 84957 00:40:27.656 22:23:46 keyring_linux -- common/autotest_common.sh@967 -- # kill 84957 00:40:27.656 Received shutdown signal, test time was about 1.000000 seconds 00:40:27.656 00:40:27.656 Latency(us) 00:40:27.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:27.656 =================================================================================================================== 00:40:27.656 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:40:27.656 22:23:46 keyring_linux -- common/autotest_common.sh@972 -- # wait 84957 00:40:29.034 22:23:48 keyring_linux -- keyring/linux.sh@42 -- # killprocess 84813 00:40:29.034 22:23:48 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 84813 ']' 00:40:29.034 22:23:48 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 84813 00:40:29.034 22:23:48 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:40:29.034 22:23:48 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:40:29.034 22:23:48 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84813 00:40:29.034 22:23:48 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:40:29.034 22:23:48 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:40:29.034 22:23:48 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84813' 00:40:29.034 killing process with pid 84813 00:40:29.034 22:23:48 keyring_linux -- common/autotest_common.sh@967 -- # kill 84813 00:40:29.034 22:23:48 keyring_linux -- common/autotest_common.sh@972 -- # wait 84813 00:40:31.568 00:40:31.568 real 0m9.651s 00:40:31.568 user 0m15.949s 00:40:31.568 sys 0m1.813s 00:40:31.568 22:23:50 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:31.568 22:23:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:40:31.568 ************************************ 00:40:31.568 END TEST keyring_linux 00:40:31.568 ************************************ 00:40:31.568 22:23:50 -- common/autotest_common.sh@1142 -- # return 0 00:40:31.568 22:23:50 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:40:31.568 22:23:50 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:40:31.568 22:23:50 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:40:31.568 22:23:50 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:40:31.568 22:23:50 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:40:31.568 22:23:50 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:40:31.568 22:23:50 -- spdk/autotest.sh@339 -- # '[' 
0 -eq 1 ']' 00:40:31.568 22:23:50 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:40:31.568 22:23:50 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:40:31.568 22:23:50 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:40:31.568 22:23:50 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:40:31.568 22:23:50 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:40:31.568 22:23:50 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:40:31.568 22:23:50 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:40:31.568 22:23:50 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:40:31.568 22:23:50 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:40:31.568 22:23:50 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:40:31.568 22:23:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:40:31.568 22:23:50 -- common/autotest_common.sh@10 -- # set +x 00:40:31.568 22:23:50 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:40:31.568 22:23:50 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:40:31.568 22:23:50 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:40:31.568 22:23:50 -- common/autotest_common.sh@10 -- # set +x 00:40:32.940 INFO: APP EXITING 00:40:32.940 INFO: killing all VMs 00:40:32.940 INFO: killing vhost app 00:40:32.940 INFO: EXIT DONE 00:40:34.312 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:40:34.312 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:40:34.312 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:40:34.312 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:40:34.312 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:40:34.312 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:40:34.312 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:40:34.312 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:40:34.312 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:40:34.312 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:40:34.312 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:40:34.312 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:40:34.312 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:40:34.312 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:40:34.312 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:40:34.312 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:40:34.312 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:40:35.280 Cleaning 00:40:35.280 Removing: /var/run/dpdk/spdk0/config 00:40:35.280 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:40:35.280 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:40:35.280 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:40:35.280 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:40:35.280 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:40:35.280 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:40:35.280 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:40:35.280 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:40:35.280 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:40:35.280 Removing: /var/run/dpdk/spdk0/hugepage_info 00:40:35.280 Removing: /var/run/dpdk/spdk1/config 00:40:35.280 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:40:35.280 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:40:35.280 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:40:35.280 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:40:35.280 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:40:35.280 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:40:35.280 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:40:35.280 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:40:35.538 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:40:35.538 Removing: /var/run/dpdk/spdk1/hugepage_info 00:40:35.538 Removing: /var/run/dpdk/spdk1/mp_socket 00:40:35.538 Removing: /var/run/dpdk/spdk2/config 00:40:35.538 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:40:35.538 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:40:35.538 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:40:35.538 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:40:35.538 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:40:35.538 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:40:35.538 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:40:35.538 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:40:35.538 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:40:35.538 Removing: /var/run/dpdk/spdk2/hugepage_info 00:40:35.538 Removing: /var/run/dpdk/spdk3/config 00:40:35.538 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:40:35.538 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:40:35.538 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:40:35.538 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:40:35.538 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:40:35.538 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:40:35.538 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:40:35.538 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:40:35.538 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:40:35.538 Removing: /var/run/dpdk/spdk3/hugepage_info 00:40:35.538 Removing: /var/run/dpdk/spdk4/config 00:40:35.538 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:40:35.538 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:40:35.538 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:40:35.538 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:40:35.538 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:40:35.538 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:40:35.538 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:40:35.538 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:40:35.538 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:40:35.538 Removing: /var/run/dpdk/spdk4/hugepage_info 00:40:35.538 Removing: /dev/shm/bdev_svc_trace.1 00:40:35.538 Removing: /dev/shm/nvmf_trace.0 00:40:35.538 Removing: /dev/shm/spdk_tgt_trace.pid3929780 00:40:35.538 Removing: /var/run/dpdk/spdk0 00:40:35.538 Removing: /var/run/dpdk/spdk1 00:40:35.538 Removing: /var/run/dpdk/spdk2 00:40:35.538 Removing: /var/run/dpdk/spdk3 00:40:35.538 Removing: /var/run/dpdk/spdk4 00:40:35.538 Removing: /var/run/dpdk/spdk_pid1170 00:40:35.538 Removing: /var/run/dpdk/spdk_pid13360 00:40:35.538 Removing: /var/run/dpdk/spdk_pid17073 00:40:35.538 Removing: /var/run/dpdk/spdk_pid23584 00:40:35.538 Removing: /var/run/dpdk/spdk_pid2716 00:40:35.538 Removing: /var/run/dpdk/spdk_pid28912 00:40:35.538 Removing: /var/run/dpdk/spdk_pid28917 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3465 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3926834 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3927853 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3929780 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3930529 00:40:35.538 
Removing: /var/run/dpdk/spdk_pid3931987 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3932407 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3933391 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3933530 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3934169 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3935630 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3936685 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3937280 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3937860 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3938361 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3938926 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3939211 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3939492 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3939689 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3940134 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3942774 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3943332 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3943876 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3944056 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3945377 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3945522 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3946878 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3947017 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3947570 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3947712 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3948044 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3948281 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3949314 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3949491 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3949797 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3950364 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3950528 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3950849 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3951145 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3951545 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3951840 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3952252 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3952543 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3952891 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3953272 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3953648 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3954052 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3954710 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3955258 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3955558 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3955859 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3956250 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3956544 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3956948 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3957250 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3957659 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3957950 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3958268 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3958641 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3959294 00:40:35.538 Removing: /var/run/dpdk/spdk_pid3961754 00:40:35.538 Removing: /var/run/dpdk/spdk_pid4017878 00:40:35.538 Removing: /var/run/dpdk/spdk_pid4020635 00:40:35.538 Removing: /var/run/dpdk/spdk_pid4027726 00:40:35.538 Removing: /var/run/dpdk/spdk_pid4031273 00:40:35.538 Removing: /var/run/dpdk/spdk_pid4033897 00:40:35.538 Removing: /var/run/dpdk/spdk_pid4034425 00:40:35.538 Removing: /var/run/dpdk/spdk_pid4038522 00:40:35.538 Removing: /var/run/dpdk/spdk_pid4044438 00:40:35.538 Removing: /var/run/dpdk/spdk_pid4044751 00:40:35.538 Removing: /var/run/dpdk/spdk_pid4048157 00:40:35.538 Removing: /var/run/dpdk/spdk_pid4052099 00:40:35.538 
Removing: /var/run/dpdk/spdk_pid4054368 00:40:35.538 Removing: /var/run/dpdk/spdk_pid4061516 00:40:35.538 Removing: /var/run/dpdk/spdk_pid4067059 00:40:35.538 Removing: /var/run/dpdk/spdk_pid4068495 00:40:35.538 Removing: /var/run/dpdk/spdk_pid4069294 00:40:35.538 Removing: /var/run/dpdk/spdk_pid4080589 00:40:35.538 Removing: /var/run/dpdk/spdk_pid4083369 00:40:35.796 Removing: /var/run/dpdk/spdk_pid4109339 00:40:35.796 Removing: /var/run/dpdk/spdk_pid4112381 00:40:35.796 Removing: /var/run/dpdk/spdk_pid4113562 00:40:35.796 Removing: /var/run/dpdk/spdk_pid4115012 00:40:35.796 Removing: /var/run/dpdk/spdk_pid4115286 00:40:35.796 Removing: /var/run/dpdk/spdk_pid4115559 00:40:35.796 Removing: /var/run/dpdk/spdk_pid4115834 00:40:35.796 Removing: /var/run/dpdk/spdk_pid4116614 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4118068 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4119379 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4119961 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4121946 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4122673 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4123554 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4126253 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4129925 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4134190 00:40:35.797 Removing: /var/run/dpdk/spdk_pid41390 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4157960 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4161493 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4165640 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4167111 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4168721 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4171782 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4174413 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4179152 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4179159 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4182193 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4182330 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4182496 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4182852 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4182874 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4184058 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4185239 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4186438 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4187714 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4188897 00:40:35.797 Removing: /var/run/dpdk/spdk_pid4190644 00:40:35.797 Removing: /var/run/dpdk/spdk_pid42012 00:40:35.797 Removing: /var/run/dpdk/spdk_pid42604 00:40:35.797 Removing: /var/run/dpdk/spdk_pid43262 00:40:35.797 Removing: /var/run/dpdk/spdk_pid44245 00:40:35.797 Removing: /var/run/dpdk/spdk_pid44902 00:40:35.797 Removing: /var/run/dpdk/spdk_pid45450 00:40:35.797 Removing: /var/run/dpdk/spdk_pid46109 00:40:35.797 Removing: /var/run/dpdk/spdk_pid48933 00:40:35.797 Removing: /var/run/dpdk/spdk_pid49271 00:40:35.797 Removing: /var/run/dpdk/spdk_pid53321 00:40:35.797 Removing: /var/run/dpdk/spdk_pid53502 00:40:35.797 Removing: /var/run/dpdk/spdk_pid55354 00:40:35.797 Removing: /var/run/dpdk/spdk_pid61277 00:40:35.797 Removing: /var/run/dpdk/spdk_pid61409 00:40:35.797 Removing: /var/run/dpdk/spdk_pid64441 00:40:35.797 Removing: /var/run/dpdk/spdk_pid65960 00:40:35.797 Removing: /var/run/dpdk/spdk_pid67485 00:40:35.797 Removing: /var/run/dpdk/spdk_pid68343 00:40:35.797 Removing: /var/run/dpdk/spdk_pid69869 00:40:35.797 Removing: /var/run/dpdk/spdk_pid70866 00:40:35.797 Removing: /var/run/dpdk/spdk_pid7455 00:40:35.797 Removing: /var/run/dpdk/spdk_pid76523 00:40:35.797 Removing: /var/run/dpdk/spdk_pid76915 00:40:35.797 
00:40:35.797 Removing: /var/run/dpdk/spdk_pid77304
00:40:35.797 Removing: /var/run/dpdk/spdk_pid775
00:40:35.797 Removing: /var/run/dpdk/spdk_pid79187
00:40:35.797 Removing: /var/run/dpdk/spdk_pid79487
00:40:35.797 Removing: /var/run/dpdk/spdk_pid79870
00:40:35.797 Removing: /var/run/dpdk/spdk_pid82308
00:40:35.797 Removing: /var/run/dpdk/spdk_pid82450
00:40:35.797 Removing: /var/run/dpdk/spdk_pid84062
00:40:35.797 Removing: /var/run/dpdk/spdk_pid84813
00:40:35.797 Removing: /var/run/dpdk/spdk_pid84957
00:40:35.797 Removing: /var/run/dpdk/spdk_pid9552
00:40:35.797 Clean
00:40:35.797 22:23:55 -- common/autotest_common.sh@1451 -- # return 0
00:40:35.797 22:23:55 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:40:35.797 22:23:55 -- common/autotest_common.sh@728 -- # xtrace_disable
00:40:35.797 22:23:55 -- common/autotest_common.sh@10 -- # set +x
00:40:35.797 22:23:55 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:40:35.797 22:23:55 -- common/autotest_common.sh@728 -- # xtrace_disable
00:40:35.797 22:23:55 -- common/autotest_common.sh@10 -- # set +x
00:40:35.797 22:23:55 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:40:35.797 22:23:55 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:40:35.797 22:23:55 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:40:35.797 22:23:55 -- spdk/autotest.sh@391 -- # hash lcov
00:40:35.797 22:23:55 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:40:35.797 22:23:55 -- spdk/autotest.sh@393 -- # hostname
00:40:35.797 22:23:55 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:40:36.055 geninfo: WARNING: invalid characters removed from testname!
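The "22:23:55 -- script@line -- # command" prefix on the trace lines above is bash xtrace output with a customized PS4 prompt. A minimal sketch that reproduces the format, assuming bash and GNU date; the harness's actual PS4 definition does not appear in this log, and the "#" versus "$" markers suggest each script sets its own variant:

    #!/usr/bin/env bash
    # PS4 is re-expanded before every traced command, so the wall-clock time,
    # source file, and line number are evaluated fresh for each line.
    PS4='+ $(date +%H:%M:%S) -- ${BASH_SOURCE}@${LINENO} -- # '
    set -x          # enable xtrace; every command is echoed with the PS4 prefix
    echo "hello"    # traced roughly as: + 22:23:55 -- demo.sh@6 -- # echo "hello"
    set +x          # disable xtrace again

Bash replicates the first character of the expanded PS4 to indicate nesting depth, which is why the sketch leads with a literal "+".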
00:41:02.611 22:24:21 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:06.803 22:24:25 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:09.329 22:24:28 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:12.608 22:24:31 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:15.148 22:24:34 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:17.703 22:24:37 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:20.985 22:24:39 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
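Restated for readability, the coverage post-processing above (autotest.sh@393-400) is: capture counters from the build tree, merge them with the pre-test baseline, then strip records for code the project does not own. A condensed sketch of the same lcov calls; every flag below appears verbatim in the log, while the $OUT shorthand, the options array, and the loop are mine, and the genhtml_* rc options are omitted for brevity:

    #!/usr/bin/env bash
    OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
    LCOV_OPTS=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q)

    # merge the pre-test baseline with the counters captured after the run
    lcov "${LCOV_OPTS[@]}" -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" \
         -o "$OUT/cov_total.info"

    # remove third-party, system, and sample-app records from the merged report
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov "${LCOV_OPTS[@]}" -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
    done

    rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"   # the log's rm uses relative paths

Filtering in place (-r writing back to the same -o file) matches what the trace shows: cov_total.info is rewritten once per exclude pattern.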
00:41:20.985 22:24:39 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:41:20.986 22:24:39 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:41:20.986 22:24:39 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:41:20.986 22:24:39 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:41:20.986 22:24:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:41:20.986 22:24:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:41:20.986 22:24:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:41:20.986 22:24:39 -- paths/export.sh@5 -- $ export PATH
00:41:20.986 22:24:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:41:20.986 22:24:39 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:41:20.986 22:24:39 -- common/autobuild_common.sh@444 -- $ date +%s
00:41:20.986 22:24:39 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720902279.XXXXXX
00:41:20.986 22:24:39 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720902279.sssze7
00:41:20.986 22:24:39 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:41:20.986 22:24:39 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:41:20.986 22:24:39 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:41:20.986 22:24:39 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:41:20.986 22:24:39 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:41:20.986 22:24:39 -- common/autobuild_common.sh@460 -- $ get_config_params
00:41:20.986 22:24:39 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:41:20.986 22:24:39 -- common/autotest_common.sh@10 -- $ set +x
00:41:20.986 22:24:39 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
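One thing the paths/export.sh trace above makes visible: each line unconditionally re-prepends tool directories, so entries such as /opt/go/1.21.1/bin and /opt/golangci/1.54.2/bin end up on PATH twice. A hypothetical duplicate-free prepend (not part of the SPDK scripts) would keep the variable flat:

    # Hypothetical helper, not from the harness: prepend a directory to PATH
    # only when it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;                 # already on PATH: do nothing
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH

Duplicates are harmless for lookup (the first matching directory wins) but make traces like the ones above noticeably harder to read.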
00:41:20.986 22:24:39 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:41:20.986 22:24:39 -- pm/common@17 -- $ local monitor
00:41:20.986 22:24:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:20.986 22:24:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:20.986 22:24:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:20.986 22:24:39 -- pm/common@21 -- $ date +%s
00:41:20.986 22:24:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:20.986 22:24:39 -- pm/common@21 -- $ date +%s
00:41:20.986 22:24:39 -- pm/common@25 -- $ sleep 1
00:41:20.986 22:24:39 -- pm/common@21 -- $ date +%s
00:41:20.986 22:24:39 -- pm/common@21 -- $ date +%s
00:41:20.986 22:24:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720902279
00:41:20.986 22:24:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720902279
00:41:20.986 22:24:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720902279
00:41:20.986 22:24:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720902279
00:41:20.986 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720902279_collect-vmstat.pm.log
00:41:20.986 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720902279_collect-cpu-load.pm.log
00:41:20.986 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720902279_collect-cpu-temp.pm.log
00:41:20.986 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720902279_collect-bmc-pm.bmc.pm.log
00:41:21.553 22:24:40 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:41:21.553 22:24:40 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:41:21.553 22:24:40 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:21.553 22:24:40 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:41:21.553 22:24:40 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:41:21.553 22:24:40 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:41:21.553 22:24:40 -- spdk/autopackage.sh@19 -- $ timing_finish
00:41:21.553 22:24:40 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:41:21.553 22:24:40 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:41:21.553 22:24:40 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:41:21.553 22:24:40 -- spdk/autopackage.sh@20 -- $ exit 0
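The stop_monitor_resources call traced below tears the collectors down through their pid files: for each monitor it tests that a .pid file exists under the power/ output directory, reads the recorded pid, and sends SIGTERM, with sudo for collect-bmc-pm since that collector was launched via sudo -E. A paraphrased sketch of the pattern; the monitor names, paths, and signals come from the log, but the loop body is my reconstruction, not the verbatim pm/common source:

    # Reconstruction of the pm/common@42-50 shutdown pattern traced below.
    POWER_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
    for monitor in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
        pidfile="$POWER_DIR/$monitor.pid"
        [[ -e "$pidfile" ]] || continue      # this monitor never started
        pid=$(<"$pidfile")                   # read the pid recorded at startup
        if [[ "$monitor" == collect-bmc-pm ]]; then
            sudo -E kill -TERM "$pid"        # BMC collector runs with elevated rights
        else
            kill -TERM "$pid"
        fi
    done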
00:41:21.553 22:24:40 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:41:21.553 22:24:40 -- pm/common@29 -- $ signal_monitor_resources TERM
00:41:21.553 22:24:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:41:21.553 22:24:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:21.553 22:24:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:41:21.553 22:24:40 -- pm/common@44 -- $ pid=98035
00:41:21.553 22:24:40 -- pm/common@50 -- $ kill -TERM 98035
00:41:21.553 22:24:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:21.553 22:24:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:41:21.553 22:24:40 -- pm/common@44 -- $ pid=98037
00:41:21.553 22:24:40 -- pm/common@50 -- $ kill -TERM 98037
00:41:21.553 22:24:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:21.553 22:24:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:41:21.553 22:24:40 -- pm/common@44 -- $ pid=98039
00:41:21.553 22:24:40 -- pm/common@50 -- $ kill -TERM 98039
00:41:21.553 22:24:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:41:21.553 22:24:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:41:21.553 22:24:40 -- pm/common@44 -- $ pid=98067
00:41:21.553 22:24:40 -- pm/common@50 -- $ sudo -E kill -TERM 98067
00:41:21.811 + [[ -n 3840553 ]]
00:41:21.811 + sudo kill 3840553
00:41:21.821 [Pipeline] }
00:41:21.837 [Pipeline] // stage
00:41:21.841 [Pipeline] }
00:41:21.855 [Pipeline] // timeout
00:41:21.859 [Pipeline] }
00:41:21.873 [Pipeline] // catchError
00:41:21.877 [Pipeline] }
00:41:21.892 [Pipeline] // wrap
00:41:21.896 [Pipeline] }
00:41:21.909 [Pipeline] // catchError
00:41:21.916 [Pipeline] stage
00:41:21.918 [Pipeline] { (Epilogue)
00:41:21.931 [Pipeline] catchError
00:41:21.932 [Pipeline] {
00:41:21.945 [Pipeline] echo
00:41:21.946 Cleanup processes
00:41:21.952 [Pipeline] sh
00:41:22.229 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:22.229 98171 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:41:22.229 98301 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:22.241 [Pipeline] sh
00:41:22.518 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:22.518 ++ grep -v 'sudo pgrep'
00:41:22.518 ++ awk '{print $1}'
00:41:22.518 + sudo kill -9 98171
00:41:22.529 [Pipeline] sh
00:41:22.812 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:41:32.807 [Pipeline] sh
00:41:33.094 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:41:33.094 Artifacts sizes are good
00:41:33.109 [Pipeline] archiveArtifacts
00:41:33.116 Archiving artifacts
00:41:33.341 [Pipeline] sh
00:41:33.625 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:41:33.641 [Pipeline] cleanWs
00:41:33.652 [WS-CLEANUP] Deleting project workspace...
00:41:33.652 [WS-CLEANUP] Deferred wipeout is used...
00:41:33.659 [WS-CLEANUP] done
00:41:33.661 [Pipeline] }
00:41:33.682 [Pipeline] // catchError
00:41:33.695 [Pipeline] sh
00:41:33.976 + logger -p user.info -t JENKINS-CI
00:41:33.984 [Pipeline] }
00:41:34.002 [Pipeline] // stage
00:41:34.008 [Pipeline] }
00:41:34.025 [Pipeline] // node
00:41:34.030 [Pipeline] End of Pipeline
00:41:34.065 Finished: SUCCESS
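For reference, the leftover-process sweep in the Epilogue above (pgrep piped through grep and awk, then kill -9) condenses to a few lines: pgrep -af matches full command lines under the workspace path, grep -v drops the pgrep invocation itself, and awk extracts the pids. A condensed equivalent of the logged commands:

    # Condensed form of the epilogue cleanup; same commands, piped together.
    pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk \
        | grep -v 'sudo pgrep' | awk '{print $1}')
    [[ -n "$pids" ]] && sudo kill -9 $pids || true   # '|| true' tolerates an empty match

In the run above this is what reaped the stray ipmitool process (pid 98171) left behind by the BMC power monitor.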